
Showing papers by "Cadence Design Systems published in 2014"


Proceedings ArticleDOI
24 Mar 2014
TL;DR: This work proposes to achieve ASIC design obfuscation based on embedded reconfigurable logic, which is determined by the end user and unknown to any party in the supply chain, to severely limit a supply chain adversary's ability to subvert a VLSI system with back doors or logic bombs.
Abstract: Hardware is the foundation and the root of trust of any security system. However, in today's global IC industry, an IP provider, an IC design house, a CAD company, or a foundry may subvert a VLSI system with back doors or logic bombs. Such a supply chain adversary's capability is rooted in knowledge of the hardware design. Successful hardware design obfuscation would severely limit a supply chain adversary's capability, if not prevent all supply chain attacks. However, not all designs are obfuscatable in traditional technologies. We propose to achieve ASIC design obfuscation based on embedded reconfigurable logic which is determined by the end user and unknown to any party in the supply chain. Combined with other security techniques, embedded reconfigurable logic can provide the root of ASIC design obfuscation, data confidentiality and tamper-proofness. As a case study, we evaluate hardware-based code injection attacks and reconfiguration-based instruction set obfuscation on the open source SPARC processor LEON2. We prevent program monitor Trojan attacks and increase the area of a minimum code injection Trojan with a 1KB ROM by 2.38% for every 1% area increase of the LEON2 processor.
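The instruction-set obfuscation idea in the case study can be sketched as a secret opcode permutation known only to the end user. Everything here (the opcode names, the key-to-permutation derivation) is illustrative, not the actual LEON2 encoding:

```python
# Sketch of reconfiguration-based instruction-set obfuscation: the end user's
# key selects a secret opcode encoding loaded into reconfigurable decode logic.
# Opcode names and key derivation are hypothetical, for illustration only.
import random

def make_decode_table(opcodes, seed):
    """Derive a secret opcode permutation from a user-held key (the seed)."""
    rng = random.Random(seed)
    shuffled = list(opcodes)
    rng.shuffle(shuffled)
    return dict(zip(opcodes, shuffled))  # architectural -> obfuscated encoding

def encode(program, table):
    """Re-encode a program with the secret encoding (done by the user's tools)."""
    return [table[op] for op in program]

def decode(program, table):
    """What the reconfigured decoder does in hardware."""
    inverse = {v: k for k, v in table.items()}
    return [inverse[op] for op in program]

opcodes = ["ADD", "SUB", "LD", "ST", "JMP", "NOP"]
table = make_decode_table(opcodes, seed=0xC0FFEE)
prog = ["LD", "ADD", "ST", "JMP"]
assert decode(encode(prog, table), table) == prog
```

A code-injection Trojan built without the key emits encodings that decode to effectively random instructions, which is why the attacker must spend area to defeat the scheme.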

87 citations


Journal ArticleDOI
01 Oct 2014
TL;DR: The core of the approach is a non-trivial, lattice-theoretic generalisation of the conflict-driven clause learning algorithm in modern SAT solvers to lattice-based abstractions, which allows for directly handling arithmetic and is more efficient than encoding a formula as a bit-vector as in current floating-point solvers.
Abstract: We present a bit-precise decision procedure for the theory of floating-point arithmetic. The core of our approach is a non-trivial, lattice-theoretic generalisation of the conflict-driven clause learning (CDCL) algorithm in modern SAT solvers to lattice-based abstractions. We use floating-point intervals to reason about the ranges of variables, which allows us to directly handle arithmetic and is more efficient than encoding a formula as a bit-vector as in current floating-point solvers. Interval reasoning alone is incomplete, and we obtain completeness by developing a conflict analysis algorithm that reasons natively about intervals. We have implemented this method in the MathSAT5 SMT solver and evaluated it on assertion checking problems that bound the values of program variables. Our new technique is faster than a bit-vector encoding approach on 80% of the benchmarks, and is faster by one order of magnitude or more on 60% of the benchmarks. The generalisation of CDCL we propose is widely applicable and can be used to derive abstraction-based SMT solvers for other theories.
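The interval abstraction the solver reasons over can be sketched in a few lines. The real procedure works on floating-point intervals with directed rounding; this sketch uses plain Python floats and ignores rounding modes:

```python
# Minimal sketch of interval reasoning over variable ranges, the abstraction
# the paper's CDCL generalisation operates on. Rounding-mode handling, which
# the actual solver needs for bit-precision, is deliberately omitted.
def add_intervals(a, b):
    """Range of x + y given x in a, y in b."""
    return (a[0] + b[0], a[1] + b[1])

def mul_intervals(a, b):
    """Range of x * y: extremes occur at endpoint products."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

def intersect(a, b):
    """Refine a range with a constraint; None signals a conflict that the
    CDCL-style conflict analysis would then explain and learn from."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return None if lo > hi else (lo, hi)

# x in [1, 3], y in [2, 4]  =>  x + y in [3, 7]
s = add_intervals((1.0, 3.0), (2.0, 4.0))
assert s == (3.0, 7.0)
# Asserting x + y in [10, 12] contradicts the deduced range:
assert intersect(s, (10.0, 12.0)) is None
```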

66 citations


Journal ArticleDOI
TL;DR: A horizon-scanning method is developed that assigns genomic applications to tiers defined by availability of synthesized evidence, and an application of the method to pharmacogenomics tests is illustrated.
Abstract: As evidence accumulates on the use of genomic tests and other health-related applications of genomic technologies, decision makers may increasingly seek support in identifying which applications have sufficiently robust evidence to suggest they might be considered for action. As an interim working process to provide such support, we developed a horizon-scanning method that assigns genomic applications to tiers defined by availability of synthesized evidence. We illustrate an application of the method to pharmacogenomics tests.

53 citations


Proceedings ArticleDOI
01 Jun 2014
TL;DR: ePlace uses a novel density function based on electrostatics to remove overlap and Nesterov's method to minimize the nonlinear cost; an approximated preconditioner is proposed to resolve the difference between large macros and standard cells.
Abstract: ePlace is a generalized analytic algorithm to handle large-scale standard-cell and mixed-size placement. We use a novel density function based on electrostatics to remove overlap and Nesterov's method to minimize the nonlinear cost. Steplength is estimated as the inverse of the Lipschitz constant, which is determined by our dynamic prediction and backtracking method. An approximated preconditioner is proposed to resolve the difference between large macros and standard cells, while an annealing engine is devised to handle macro legalization followed by placement of standard cells. The above innovations are integrated into our placement prototype ePlace, which outperforms the leading-edge placers on the respective standard-cell and mixed-size benchmark suites. Specifically, ePlace produces 2.83%, 4.59% and 7.13% shorter wirelength while running 3.05×, 2.84× and 1.05× faster than BonnPlace, MAPLE and NTUplace3-unified on average over the ISPD 2005, ISPD 2006 and MMS circuits, respectively.
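The optimisation core (Nesterov's method with a steplength of 1/L, where the Lipschitz constant L is grown by backtracking) can be sketched on a toy one-dimensional quadratic rather than the electrostatic placement cost:

```python
# Sketch of Nesterov's accelerated method with a backtracking Lipschitz
# estimate, the optimisation engine ePlace builds on. The objective here is a
# toy quadratic, not the actual electrostatic density-plus-wirelength cost.
def nesterov(grad, f, x0, iters=200, L=1.0):
    x, y, t = x0, x0, 1.0
    for _ in range(iters):
        g = grad(y)
        # Backtracking: double L until the quadratic upper bound holds,
        # then take a step of length 1/L.
        while True:
            x_new = y - g / L
            if f(x_new) <= f(y) - g * (y - x_new) + L / 2 * (y - x_new) ** 2:
                break
            L *= 2.0
        # Nesterov momentum update.
        t_new = (1 + (1 + 4 * t * t) ** 0.5) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

f = lambda x: (x - 3.0) ** 2
grad = lambda x: 2.0 * (x - 3.0)
x_star = nesterov(grad, f, x0=0.0)
assert abs(x_star - 3.0) < 1e-6
```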

41 citations


Journal ArticleDOI
TL;DR: This paper provides an informative discussion of varied approaches to parametric yield estimation, including recently developed methods that provide a highly accurate and fast alternative to Monte Carlo methods for some types of analysis.
Abstract: Accurate yield estimation has always been an important driver of design. For analog/mixed-signal circuits, the dominant yield loss mechanisms are parametric in nature. This paper provides an informative discussion of varied approaches to parametric yield estimation, including recently developed methods that provide a highly accurate and fast alternative to Monte Carlo methods for some types of analysis.

40 citations


Journal ArticleDOI
TL;DR: A method of treating pelvic discontinuity using porous tantalum components with a distraction technique that achieves both initial stability and subsequent long-term biological fixation is described.
Abstract: A pelvic discontinuity occurs when the superior and inferior parts of the hemi-pelvis are no longer connected, a condition that is difficult to manage when associated with a failed total hip replacement. Chronic pelvic discontinuity is found in 0.9% to 2.1% of hip revision cases, with risk factors including severe pelvic bone loss, female gender, prior pelvic radiation and rheumatoid arthritis. Common treatment options include pelvic plating with allograft, cage reconstruction, custom triflange implants, and porous tantalum implants with modular augments. The optimal technique depends upon the degree of the discontinuity, the amount of available bone stock and the likelihood of achieving stable healing between the two segments. A method of treating pelvic discontinuity using porous tantalum components with a distraction technique that achieves both initial stability and subsequent long-term biological fixation is described.

39 citations


Journal ArticleDOI
TL;DR: An optimal buffer insertion algorithm is applied to the TSV-aware 3-D wirelength distribution models and various prediction results on wirelength, delay, and power consumption of3-D ICs are presented.
Abstract: 3-D integrated circuits (3-D ICs) are expected to have shorter wirelength, better performance, and less power consumption than 2-D ICs. These benefits come from die stacking and use of through-silicon vias (TSVs) fabricated for interconnections across dies. However, the use of TSVs has several negative impacts such as area and capacitance overhead. To predict the quality of 3-D ICs more accurately, TSV-aware 3-D wirelength distribution models considering the negative impacts were developed. In this paper, we apply an optimal buffer insertion algorithm to the TSV-aware 3-D wirelength distribution models and present various prediction results on wirelength, delay, and power consumption of 3-D ICs. We also apply the framework to 2-D and 3-D ICs built with various combinations of process and TSV technologies and predict the quality of today's and future 3-D ICs.
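The kind of buffered-wire delay model that such wirelength-based predictions rest on can be sketched with the Elmore model; the parameters below are unit-less and illustrative, not the paper's technology numbers:

```python
# Back-of-envelope sketch of optimal buffer insertion on a single wire under
# the Elmore delay model. R_w/C_w are total wire resistance/capacitance,
# R_b/C_b the buffer output resistance and input capacitance (all illustrative).
import math

def buffered_delay(n, R_w, C_w, R_b, C_b):
    """Elmore delay of a wire split into n+1 equal segments by n buffers
    (the source driver is idealised as one more identical buffer)."""
    k = n + 1  # number of wire segments / driving stages
    return k * (R_b * (C_w / k + C_b) + (R_w / k) * (C_w / (2 * k) + C_b))

def optimal_buffers(R_w, C_w, R_b, C_b):
    """Continuous optimum k* = sqrt(R_w*C_w / (2*R_b*C_b)) segments,
    rounded and converted to a buffer count."""
    k = math.sqrt(R_w * C_w / (2 * R_b * C_b))
    return max(0, round(k) - 1)

R_w, C_w, R_b, C_b = 100.0, 100.0, 1.0, 1.0
n = optimal_buffers(R_w, C_w, R_b, C_b)
# The discrete optimum should beat both of its neighbours:
assert buffered_delay(n, R_w, C_w, R_b, C_b) <= buffered_delay(n + 1, R_w, C_w, R_b, C_b)
assert buffered_delay(n, R_w, C_w, R_b, C_b) <= buffered_delay(n - 1, R_w, C_w, R_b, C_b)
```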

30 citations


Journal ArticleDOI
TL;DR: A method for horizon scanning and 1 year of data on translational research beyond bench to bedside are provided to assess the validity, utility, implementation, and outcomes of such applications.

28 citations


Proceedings ArticleDOI
30 Mar 2014
TL;DR: This session aims to highlight the importance of CPPR during timing analysis, as well as explore novel methods for fast CPPR from the top performers of the TAU 2014 timing contest.
Abstract: To margin against modeling limitations in considering design and electrical complexities (e.g., crosstalk coupling, voltage drops) as well as variability (e.g., manufacturing process, environmental), "early" and "late" signal propagation delays in static timing analysis are often made pessimistic by addition of extra guard bands. While these forced "early-late splits" provide desired margins, the splits applied across the entire design introduce excessive and undesired pessimism. To this end, "common path pessimism removal (CPPR)" eliminates the redundant pessimism during timing analysis. The aim of the TAU 2014 timing contest is to seek novel ideas for fast CPPR by: (i) introducing the concept and importance of common path pessimism removal while highlighting the exponential run-time complexity of an optimal solution, (ii) encouraging novel parallelization techniques (including multi-threading), and (iii) facilitating the creation of a timing analysis and CPPR framework with benchmarks to further advance research in this area.
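The pessimism credit at the heart of CPPR can be sketched on toy data: along the portion of the launch and capture clock paths that is physically common, the same cells cannot simultaneously be at their late and early extremes, so the slack check may be credited with the accumulated early/late split of the shared segment (node names and delays below are invented; real tools walk the clock tree):

```python
# Toy sketch of the CPPR credit: sum of (late - early) delay over the common
# prefix of the launch and capture clock paths. Names and values are invented.
def cppr_credit(launch_path, capture_path, early, late):
    """Sum late-early over the common prefix of two clock paths."""
    credit = 0.0
    for a, b in zip(launch_path, capture_path):
        if a != b:
            break  # paths diverge: pessimism beyond here is legitimate
        credit += late[a] - early[a]
    return credit

early = {"clk": 1.0, "buf1": 0.9, "buf2": 0.8, "buf3": 0.7}
late  = {"clk": 1.2, "buf1": 1.1, "buf2": 1.0, "buf3": 0.9}
launch  = ["clk", "buf1", "buf2"]
capture = ["clk", "buf1", "buf3"]
# Shared segment is clk -> buf1: credit = (1.2-1.0) + (1.1-0.9) = 0.4
assert abs(cppr_credit(launch, capture, early, late) - 0.4) < 1e-12
```

The contest's run-time challenge comes from having to evaluate such a credit per launch/capture flop pair, of which a design has a combinatorially large number.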

27 citations


Journal ArticleDOI
TL;DR: The strongest prognostic variable for short- and long-term survival after PD for PDA is lymph node ratio, while abdominal pain on presentation, operative time, and estimated blood loss were associated with decreased survival at various time points.

25 citations


Journal ArticleDOI
TL;DR: A novel technique aimed at mitigating the opportunity cost of investing die real estate on accelerators by allowing GP-CPU cores to reuse accelerator memory as a non-uniform cache architecture (NUCA) substrate.
Abstract: Accelerators integrated on-die with General-Purpose CPUs (GP-CPUs) can yield significant performance and power improvements. Their extensive use, however, is ultimately limited by their area overhead; due to their high degree of specialization, the opportunity cost of investing die real estate on accelerators can become prohibitive, especially for general-purpose architectures. In this paper we present a novel technique aimed at mitigating this opportunity cost by allowing GP-CPU cores to reuse accelerator memory as a non-uniform cache architecture (NUCA) substrate. On a system with a last-level (level-2) cache of 128 kB, our technique achieves on average a 25% performance improvement when reusing four 512 kB accelerator memory blocks to form a level-3 cache. Making these blocks reusable as NUCA slices incurs on average a 1.89% area overhead with respect to equally-sized ad hoc cache slices.

Patent
06 Jun 2014
TL;DR: In this paper, the authors use connectivity information or models, design attributes, and system intelligence layer(s) to make blocks at lower levels aware of changes made in other blocks at the same or different levels, so that the design can be implemented at different levels synchronously.
Abstract: Various embodiments use connectivity information or model(s), design attribute(s), and system intelligence layer(s) to make lower blocks at lower levels aware of changes made in other blocks at same or different levels to implement the design at different levels synchronously. Budgeting is performed for the design to distribute budgets to respective blocks in the design. The various budgets may be borrowed from one or more blocks and lent to a block in order for this block to meet closure requirements such that a total number of iterations of the reassembly process, which integrates lower level blocks into top level design, may be reduced or completely eliminated. The design attribute(s) or the connectivity model(s) or information is updated upon the identification of changes to provide the latest information or data for properly closing a design.

Patent
30 May 2014
TL;DR: In this paper, methods for interconnecting circuit components with track patterns are disclosed: a source pin on a first track and a destination pin on a second track are identified, and a third track in a different routing direction is determined based on design rules governing track patterns.
Abstract: Methods and systems for interconnecting circuit components with track patterns are disclosed. The method identifies a source pin on a first track and a destination pin on a second track and determines a third track in a different routing direction based on design rules governing track patterns. The method further determines a transition pattern for the interconnection between the source pin and the destination pin by using at least the third track. The method may use one or more dummy pins or ordering of pin connections in implementing the interconnection to satisfy certain design rules. The lengths of some wire segments of the interconnection may be further adjusted to satisfy certain design rules. Compaction may be performed to have two wire segments share the same track while the lengths or widths of one or both wire segments may be further modified to ensure design rule compliance.

Proceedings ArticleDOI
01 Nov 2014
TL;DR: In this article, the authors highlight the importance of CPPR during timing analysis, as well as explore novel methods for fast CPPR from the top performers of the TAU 2014 timing contest.
Abstract: To protect against modeling limitations in considering design and electrical complexities, as well as variability, early and late signal propagation times in static timing analysis are often made pessimistic by addition of extra guard bands. However, these forced early-late splits introduce excessive and undesired pessimism. To this end, common path pessimism removal (CPPR) eliminates guaranteed redundant pessimism during timing analysis. This session aims to highlight the importance of CPPR during timing analysis, as well as explore novel methods for fast CPPR from the top performers of the TAU 2014 timing contest.

Patent
22 Jul 2014
TL;DR: In this article, the authors describe the insertion of at least one device, and optionally chains of devices, into a pre-existing chain of interconnected devices within a graphical representation of a circuit design such as a circuit layout, circuit mask, or a schematic.
Abstract: The subject system and method are generally directed to the user-friendly insertion of at least one device, and optionally chains of devices, into at least one pre-existing chain of interconnected devices within a graphical representation of a circuit design such as a circuit layout, circuit mask, or a schematic. The system and method provide for discerning the intended insertion points and performing remedial transformations of the devices within the chains to ensure compliance with both structural and operational requirements of the circuit design.

Proceedings ArticleDOI
TL;DR: New challenges facing place-and-route tooling are outlined, solutions to overcome these challenges are reviewed, and a manufacturing ready implementation is demonstrated.
Abstract: This paper reviews the escalation in design constraints imposed on 2nd-level wiring by multiple patterning exposure techniques in the 10nm technology node (i.e. ~45nm wiring pitch) relative to the 14nm technology node (i.e. 64nm wiring pitch). Specifically, new challenges facing place-and-route tooling are outlined, solutions to overcome these challenges are reviewed, and a manufacturing-ready implementation is demonstrated.

Journal ArticleDOI
TL;DR: The experimental results demonstrate that the EM modeling can effectively capture the EM reliability of the full-chip level 3-D PDNs with MSVs, which can be hard to achieve by the traditional EM analysis based on the individual local via or the TSV.
Abstract: Electromigration (EM) in power distribution networks (PDNs) is a major reliability issue in 3-D ICs. While the EM issues of local vias and through-silicon-vias (TSV) have been studied separately, the interplay of TSVs and conventional local vias in 3-D ICs has not been well investigated. This co-design is necessary when the die-to-die vertical power delivery is done using both TSVs and local interconnects. In this paper, we model EM for PDNs of 3-D ICs with a focus on multiscale via (MSV) structure, i.e., TSVs and local vias used together for vertical power delivery. We study the impact of structure, material, and preexisting void conditions on the EM-related lifetime of our MSV structures. We also investigate the transient IR-voltage change of full-chip level 3-D PDNs with MSVs with our model. The experimental results demonstrate that our EM modeling can effectively capture the EM reliability of the full-chip level 3-D PDNs with MSVs, which can be hard to achieve by the traditional EM analysis based on the individual local via or the TSV.

Patent
31 Mar 2014
TL;DR: In this paper, a system and method for enabling the display and movement of a boundary box of an instance master inclusive of specific predetermined geometric figures, including master pins, master halo and master boundary edges, is provided.
Abstract: A system and method for enabling the display and movement of a boundary box of an instance master inclusive of specific predetermined geometric figures, including master pins, master halo and master boundary edges, is provided. The system and method improve the utilization of computer resources and enable users to drag and use instance masters in their designs more efficiently and rapidly.

Proceedings ArticleDOI
TL;DR: The design of PDB, the methodology for identifying and analyzing patterns across multiple design and technology cycles, and the use of PDB to accelerate manufacturing process learning are described.
Abstract: Pattern-based approaches to physical verification, such as DRC Plus, which use a library of patterns to identify problematic 2D configurations, have been proven to be effective in capturing the concept of manufacturability where traditional DRC fails. As the industry moves to advanced technology nodes, the manufacturing process window tightens and the number of patterns continues to rapidly increase. This increase in patterns brings about challenges in identifying, organizing, and carrying forward the learning of each pattern from test chip designs to first product and then to multiple product variants. This learning includes results from printability simulation, defect scans and physical failure analysis, which are important for accelerating yield ramp. Using pattern classification technology and a relational database, GLOBALFOUNDRIES has constructed a pattern database (PDB) of more than one million potential yield detractor patterns. In PDB, 2D geometries are clustered based on similarity criteria, such as radius and edge tolerance. Each cluster is assigned a representative pattern and a unique identifier (ID). This ID is then used as a persistent reference for linking together information such as the failure mechanism of the patterns, the process condition where the pattern is likely to fail and the number of occurrences of the pattern in a design. Patterns and their associated information are used to populate DRC Plus pattern matching libraries for design-for-manufacturing (DFM) insertion into the design flow for auto-fixing and physical verification. Patterns are used in a production-ready yield learning methodology to identify and score critical hotspot patterns. Patterns are also used to select sites for process monitoring in the fab. In this paper, we describe the design of PDB, the methodology for identifying and analyzing patterns across multiple design and technology cycles, and the use of PDB to accelerate manufacturing process learning. 
One such analysis tracks the life cycle of a pattern from the first time it appears as a potential yield detractor until it is either fixed in the manufacturing process or stops appearing in design due to DFM techniques such as DRC Plus. Another such analysis systematically aggregates the results of a pattern to highlight potential yield detractors for further manufacturing process improvement.
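The pattern-ID idea behind PDB can be sketched by canonicalising a clipped 2D layout window so that translated occurrences of the same configuration map to one persistent identifier. The rectangle representation and class below are illustrative; the production system also applies radius and edge-tolerance similarity criteria:

```python
# Sketch of persistent pattern IDs: translate each clipped window (a set of
# axis-aligned rectangles) to the origin so that shifted occurrences of the
# same 2D configuration collapse to one key. Tolerance matching is omitted.
def canonical(rects):
    """Normalise a set of (x1, y1, x2, y2) rectangles to the origin."""
    x0 = min(r[0] for r in rects)
    y0 = min(r[1] for r in rects)
    return tuple(sorted((x - x0, y - y0, X - x0, Y - y0) for x, y, X, Y in rects))

class PatternDB:
    def __init__(self):
        self.ids = {}
    def pattern_id(self, rects):
        """Return the existing ID for this configuration, or assign a new one."""
        key = canonical(rects)
        return self.ids.setdefault(key, len(self.ids))

db = PatternDB()
a = [(0, 0, 10, 2), (0, 4, 10, 6)]   # two parallel wires
b = [(5, 5, 15, 7), (5, 9, 15, 11)]  # same pattern, shifted
c = [(0, 0, 10, 2), (0, 5, 10, 7)]   # different spacing
assert db.pattern_id(a) == db.pattern_id(b)
assert db.pattern_id(a) != db.pattern_id(c)
```

The persistent ID is what lets simulation results, defect scans, and occurrence counts from different chips be joined in one relational record per pattern.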

Journal ArticleDOI
TL;DR: In this article, a fully integrated switch-capacitor (SC) dc-dc boost converter is proposed that uses a switching scheme called nonoverlapped rotational time-interleaving (NORI), which eliminates shoot-through loss and mitigates the adverse effect of dead times between successive charging and discharging phases, resulting in a small ripple.
Abstract: In this paper, we propose a fully-integrated switch-capacitor (SC) dc-dc boost converter having high power efficiency, low output ripple, and high power density. It uses a switching scheme called nonoverlapped rotational time-interleaving (NORI), which eliminates shoot-through loss and mitigates the adverse effect of dead times between successive charging and discharging phases, resulting in a small ripple. A basic cross-coupled voltage doubler has been adopted to implement the NORI scheme working over a wide range of switching frequencies. Dynamic adjustment of the frequency provides high power density while maintaining high power efficiency over a wide load current range. The proposed converter has been fabricated in a 0.18-μm CMOS thick-gate process for 3.3 to 5.5 V conversion and output ripple not more than 0.5% of the output voltage. The converter uses only 440 pF to deliver up to 25 mA at 5.3 V regulated output. The measured peak power efficiency is 89% at 20 mA for unregulated output. With mixed-mode regulation, the measured efficiency of the converter including analog blocks is 83.5% at 15 mA, while the overall efficiency is 75%. Power density of the designed converter is more than 0.85 W/mm² considering the capacitor area.

Journal ArticleDOI
TL;DR: This work proposes a comprehensive study for LELE-EC layout decomposition, and integer linear programming is formulated to minimize the conflict and the stitch numbers of both input layout and end-cut candidates.
Abstract: Triple patterning lithography (TPL) is one of the most promising techniques in the 14nm logic node and beyond. However, traditional LELELE type TPL technology suffers from native conflict and overlapping problems. Recently the LELE-EC process was proposed to overcome these limitations, where the third mask is used to generate the end-cuts. In this paper we propose the first study for LELE-EC layout decomposition. Conflict graphs and end-cut graphs are constructed to extract all the geometrical relationships of the input layout and end-cut candidates. Based on these graphs, integer linear programming (ILP) is formulated to minimize the conflict number and the stitch number.
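The decomposition objective can be illustrated on a tiny instance. The paper formulates it as an ILP; the stand-in below enumerates mask assignments exhaustively and uses a deliberately simplified model in which some conflict-graph edges can be resolved by a stitch at a smaller penalty:

```python
# Simplified stand-in for the ILP: assign each feature to one of two masks,
# minimising conflicts (same-mask edges) plus cheaper stitches on the edges
# where a stitch is assumed legal. Exhaustive search replaces the ILP solver.
from itertools import product

def decompose(n, conflict_edges, stitchable, stitch_cost=0.1):
    best = None
    for colors in product((0, 1), repeat=n):
        conflicts = sum(1 for u, v in conflict_edges
                        if colors[u] == colors[v] and (u, v) not in stitchable)
        stitches = sum(1 for u, v in conflict_edges
                       if colors[u] == colors[v] and (u, v) in stitchable)
        cost = conflicts + stitch_cost * stitches
        if best is None or cost < best[0]:
            best = (cost, colors, conflicts, stitches)
    return best

# Odd cycle: not two-colorable, but the (0, 2) edge is assumed stitchable.
cost, colors, conflicts, stitches = decompose(3, [(0, 1), (1, 2), (0, 2)], {(0, 2)})
assert conflicts == 0 and stitches == 1
```

The real formulation additionally decides which end-cut candidates to place on the third mask, which is what makes the geometric end-cut graph necessary.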

Book ChapterDOI
23 Sep 2014
TL;DR: In this article, the authors introduced a dedicated authenticated encryption scheme ICEPOLE, which is suitable for high-throughput network nodes or generally any environment where specialized hardware such as FPGAs or ASICs can be used to provide high data processing rates.
Abstract: This paper introduces our dedicated authenticated encryption scheme ICEPOLE. ICEPOLE is a high-speed hardware-oriented scheme, suitable for high-throughput network nodes or generally any environment where specialized hardware such as FPGAs or ASICs can be used to provide high data processing rates. ICEPOLE-128, the primary ICEPOLE variant, is very fast. On the modern FPGA device Virtex 6, a basic iterative architecture of ICEPOLE reaches 41 Gbits/s, which is over 10 times faster than the equivalent implementation of AES-128-GCM. The throughput-to-area ratio is also substantially better when compared to AES-128-GCM. We have carefully examined the security of the algorithm through a range of cryptanalytic techniques and our findings indicate that ICEPOLE offers a high security level.

Journal ArticleDOI
TL;DR: The following case study discusses the complex issues involved in treating coexistent gout and infection in a prosthetic knee.
Abstract: Gout is a common arthritic condition that continues to increase in prevalence. Symptoms of gout include a rapid onset of pain, erythema, swelling, and warmth in the affected joint. These symptoms may mimic cellulitis, thrombophlebitis, and septic arthritis; however, a definitive diagnosis can be obtained through joint aspiration and subsequent fluid analysis to assess for the presence of monosodium urate crystals. Gout can also be present after total joint replacement. Because of the similarity of symptoms to septic arthritis, the diagnosis may be missed. Gout may be present in a prosthetic knee or may coexist with septic arthritis. Therefore, analysis of knee aspirations should include cell count, gram stain, cultures, and an examination of the synovial fluid for crystals. The following case study discusses the complex issues involved in treating coexistent gout and infection in a prosthetic knee.

Patent
01 Oct 2014
TL;DR: In this article, various techniques are described that check, verify, or test multi-fabric designs by receiving a request for checking correctness of a multi-fabric design across at least a first design fabric and a second design fabric.
Abstract: Disclosed are various techniques that check, verify, or test multi-fabric designs by receiving a request for checking correctness of a multi-fabric design across at least a first design fabric and a second design fabric. A request for action is transmitted from a first EDA tool session to a second EDA tool session. Connectivity information of second design data in the second design fabric is identified by the second EDA tool session in response to the request for action from the first EDA tool session. These various techniques then check the correctness of the multi-fabric design in the first design fabric by using at least the connectivity information of the second design data. A symbolic representation may be used to represent design data in an EDA tool session to which the design data are not native.

Patent
23 Dec 2014
TL;DR: In this paper, the authors proposed a system and method to ensure reliable high speed data transfer in multiple data rate nonvolatile memory, such as double data rate (DDR) NAND flash memory and the like.
Abstract: The subject system and method are generally directed to ensuring reliable high speed data transfer in multiple data rate nonvolatile memory, such as double data rate (DDR) nonvolatile NAND flash memory and the like. The system and method provide measures to achieve read and write training for data signals (DQ) and the data strobe signal (DQS), one relative to the other. In such manner, high speed data transfers to and from nonvolatile memory such as flash devices may be performed with a reduced risk of data loss even at high operational frequencies.

01 Jan 2014
TL;DR: A new bytecode set is implemented, which includes additional bytecodes that allow the Just-in-time compiler to generate less generic, and hence simpler and faster code sequences for frequently executed primitives.
Abstract: The Cog virtual machine features a bytecode interpreter and a baseline Just-in-time compiler. To reach the performance level of industrial-quality virtual machines such as Java HotSpot, it needs to employ an adaptive inlining compiler, a tool that on the fly aggressively optimizes frequently executed portions of code. We decided to implement such a tool as a bytecode-to-bytecode optimizer, implemented above the virtual machine, where it can be written and developed in Smalltalk. The optimizer we plan needs to extend the operations encoded in the bytecode set, and its quality heavily depends on the bytecode set quality. The current bytecode set understood by the virtual machine is old and lacks any room to add new operations. We decided to implement a new bytecode set, which includes additional bytecodes that allow the Just-in-time compiler to generate less generic, and hence simpler and faster code sequences for frequently executed primitives. The new bytecode set includes traps for validating speculative inlining decisions and is extensible without compromising optimization opportunities. In addition, we took advantage of this work to solve limitations of the current bytecode set such as the maximum number of instance variables per class, or the number of literals per method. In this paper we describe this new bytecode set. We plan to have it in production in the Cog virtual machine and its Pharo, Squeak and Newspeak clients in the coming year.
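Why a specialised bytecode for a frequent primitive helps can be sketched with a toy stack interpreter: a generic "send" must dispatch at run time, while a dedicated bytecode encodes the operation directly. The opcode names are hypothetical, not the Cog VM's Smalltalk encoding:

```python
# Toy stack interpreter contrasting a generic message send with a specialised
# bytecode for integer addition. Opcode names are invented for illustration.
def interpret(bytecode, stack):
    for op, *args in bytecode:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD_INT":        # specialised: no lookup, direct operation
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "SEND":           # generic: run-time method lookup
            selector = args[0]
            b, a = stack.pop(), stack.pop()
            stack.append(getattr(a, selector)(b))
    return stack

assert interpret([("PUSH", 2), ("PUSH", 3), ("ADD_INT",)], []) == [5]
assert interpret([("PUSH", 2), ("PUSH", 3), ("SEND", "__add__")], []) == [5]
```

Both programs compute the same result, but the specialised path gives a JIT a fixed, simple code sequence to emit, which is the effect the new bytecode set is after.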

Proceedings ArticleDOI
01 Oct 2014
TL;DR: A hierarchical and core-based architecture for generating tests for cores and migrating them to the chip that allows testing multiple instances of the same core for the same cost as testing a single instance.
Abstract: As chip design sizes continue to increase and they contain multiple instances of large and small cores, there is a need for a chip test architecture that allows efficient chip-level tests to be created while also reducing the memory and CPU time needed to create the tests. We define a hierarchical and core-based architecture for generating tests for cores and migrating them to the chip. This architecture allows testing multiple instances of the same core for the same cost as testing a single instance. The architecture also allows testing multiple instances of different cores as well. Memory use is kept low by generating tests for cores out of context and migrating them to the chip. We never have to build a full gate-level chip ATPG model. We show results of pattern count reduction possible when targeting multiple cores simultaneously.

Patent
28 Jan 2014
TL;DR: In this paper, a system and method for adaptive self-calibration to remove sample timing error in time-interleaved ADC of an analog signal is presented, where a plurality of ADC channels recursively sample the analog signal within a series of sample segments according to a predetermined sampling clock.
Abstract: A system and method are provided for adaptive self-calibration to remove sample timing error in time-interleaved ADC of an analog signal. A plurality of ADC channels recursively sample the analog signal within a series of sample segments according to a predetermined sampling clock to generate a time-interleaved series of output samples. A timing skew detection unit is coupled to the ADC channels, which generates for each sample segment a timing skew factor indicative of sampling clock misalignment within the sample segment. Each timing skew factor is generated based adaptively on the output samples for a selective combination of segments including at least one preceding and at least one succeeding sample segment. A plurality of timing control units respectively coupled to the ADC channels adjust time delays for the sampling clock within respective sample segments responsive to the timing skew factors, thereby substantially aligning the sample segments with the sampling clock.
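The skew-detection principle can be illustrated on a toy two-channel interleaved sampler: a skewed channel's samples correlate differently with their neighbours, and a statistic of that kind is what drives the per-channel delay adjustment. The signal, statistic, and parameters below are idealised stand-ins, not the patent's detector:

```python
# Toy sketch of timing-skew detection in a two-channel time-interleaved ADC:
# channel 0 samples even ticks, channel 1 odd ticks plus a timing skew. For a
# sinusoidal input, the neighbour-correlation statistic below averages to zero
# when the channels are aligned and becomes nonzero under skew.
import math

def interleaved_samples(freq, fs, n, skew):
    """Simulate two interleaved channels; odd-index samples carry the skew."""
    out = []
    for i in range(n):
        t = i / fs + (skew if i % 2 else 0.0)
        out.append(math.sin(2 * math.pi * freq * t))
    return out

def skew_statistic(x):
    """Mean of x[n]*(x[n+1]-x[n-1]) over odd n: ~0 when channels align."""
    terms = [x[n] * (x[n + 1] - x[n - 1]) for n in range(1, len(x) - 1, 2)]
    return sum(terms) / len(terms)

aligned = skew_statistic(interleaved_samples(1e6, 1e8, 4000, skew=0.0))
skewed = skew_statistic(interleaved_samples(1e6, 1e8, 4000, skew=1e-9))
assert abs(aligned) < abs(skewed)  # nonzero statistic flags the misalignment
```

A control loop would feed such a statistic back into the per-channel delay lines until it is driven toward zero, which is the alignment condition.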

Patent
29 Oct 2014
TL;DR: In this paper, a method for debugging a program that includes declarative code and procedural code is presented to a user on an output device data relating to execution of the procedural code.
Abstract: A method for debugging a program that includes declarative code and procedural code includes presenting to a user on an output device data relating to execution of the procedural code and data relating to execution of the declarative code. The data is presented in the form of a sequence of execution events corresponding to a computational flow of an execution of the program.

Journal ArticleDOI
TL;DR: Adding ATO to RT and TMZ is feasible and tolerable but does not appear to improve outcome compared to RTOG 0525 data where OS is 16.6 months in newly diagnosed malignant gliomas.
Abstract: 2072 Background: Current standard treatment for GBM is radiation (RT) and temozolomide (TMZ). We published phase I data of the addition of arsenic trioxide (ATO) to RT and TMZ. We now present the p...