
Showing papers by "Eindhoven University of Technology" published in 2011


Journal ArticleDOI
TL;DR: Preoperative short-term radiotherapy significantly improved 10-year survival in patients with a negative circumferential margin and TNM stage III, and future staging techniques should offer possibilities to select patient groups for which the balance between benefits and side-effects will result in sufficiently large gains.
Abstract:
Background: The TME trial investigated the value of preoperative short-term radiotherapy in combination with total mesorectal excision (TME). Long-term results are reported after a median follow-up of 12 years.
Methods: Between Jan 12, 1996, and Dec 31, 1999, 1861 patients with resectable rectal cancer without evidence of distant disease were randomly assigned to TME preceded by 5 × 5 Gy radiotherapy or TME alone (ratio 1:1). Randomisation was based on permuted blocks of six with stratification according to centre and expected type of surgery. The primary endpoint was local recurrence, analysed for all eligible patients who underwent a macroscopically complete local resection.
Findings: 10-year cumulative incidence of local recurrence was 5% in the group assigned to radiotherapy and surgery and 11% in the surgery-alone group (p
Interpretation: For all eligible patients, preoperative short-term radiotherapy reduced 10-year local recurrence by more than 50% relative to surgery alone without an overall survival benefit. For patients with a negative resection margin, the effect of radiotherapy was irrespective of the distance from the anal verge and led to an improved cancer-specific survival, which was nullified by an increase in other causes of death, resulting in an equal overall survival. Nevertheless, preoperative short-term radiotherapy significantly improved 10-year survival in patients with a negative circumferential margin and TNM stage III. Future staging techniques should offer possibilities to select patient groups for which the balance between benefits and side-effects will result in sufficiently large gains.
Funding: The Dutch Cancer Society, the Dutch National Health Council, and the Swedish Cancer Society.

1,449 citations


Journal ArticleDOI
TL;DR: The paper ends with an exploration of the way in which these core design practices can be adopted for organisational problem solving and innovation.

1,227 citations


Journal ArticleDOI
TL;DR: An overview of the research on biomass upgrading by torrefaction for the production of biofuels is presented in this article, where the removal of oxygen from biomass aims to produce a fuel with increased energy density by decomposing the reactive hemicellulose fraction.
Abstract: An overview of the research on biomass upgrading by torrefaction for the production of biofuels is presented. Torrefaction is a thermal conversion method of biomass in the low temperature range of 200–300 °C. Biomass is pre-treated to produce a high quality solid biofuel that can be used for combustion and gasification. In this review the characteristics of torrefaction are described and a short history of torrefaction is given. Torrefaction is based on the removal of oxygen from biomass which aims to produce a fuel with increased energy density by decomposing the reactive hemicellulose fraction. Different reaction conditions (temperature, inert gas, reaction time) and biomass resources lead to various solid, liquid and gaseous products. A short overview of the different mass and energy balances is presented. Finally, the technology options and the most promising torrefaction applications and their economic potential are described.
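The mass and energy balances the review surveys boil down to a simple ratio; the back-of-envelope sketch below uses yield figures often quoted for torrefaction, not numbers taken from this article:

```python
# Rough torrefaction mass/energy balance (illustrative figures, not from
# the review): energy densification = energy yield / mass yield.
mass_yield = 0.70     # fraction of dry mass retained in the torrefied solid (assumed)
energy_yield = 0.90   # fraction of the original heating value retained (assumed)

densification = energy_yield / mass_yield          # ~1.3x energy density
lhv_wood = 17.0                                    # MJ/kg dry wood, typical (assumed)
print(f"densification factor: {densification:.2f}")
print(f"torrefied LHV approx. {lhv_wood * densification:.1f} MJ/kg")
```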

1,125 citations


Proceedings ArticleDOI
21 Oct 2011
TL;DR: A proof-of-concept implementation of the recent somewhat homomorphic encryption scheme of Brakerski and Vaikuntanathan, whose security relies on the "ring learning with errors" (Ring LWE) problem, and a number of application-specific optimizations to the encryption scheme, including the ability to convert between different message encodings in a ciphertext.
Abstract: The prospect of outsourcing an increasing amount of data storage and management to cloud services raises many new privacy concerns for individuals and businesses alike. The privacy concerns can be satisfactorily addressed if users encrypt the data they send to the cloud. If the encryption scheme is homomorphic, the cloud can still perform meaningful computations on the data, even though it is encrypted. In fact, we now know a number of constructions of fully homomorphic encryption schemes that allow arbitrary computation on encrypted data. In the last two years, solutions for fully homomorphic encryption have been proposed and improved upon, but it is hard to ignore the elephant in the room, namely efficiency -- can homomorphic encryption ever be efficient enough to be practical? Certainly, it seems that all known fully homomorphic encryption schemes have a long way to go before they can be used in practice. Given this state of affairs, our contribution is two-fold. First, we exhibit a number of real-world applications, in the medical, financial, and the advertising domains, which require only that the encryption scheme is "somewhat" homomorphic. Somewhat homomorphic encryption schemes, which support a limited number of homomorphic operations, can be much faster, and more compact than fully homomorphic encryption schemes. Secondly, we show a proof-of-concept implementation of the recent somewhat homomorphic encryption scheme of Brakerski and Vaikuntanathan, whose security relies on the "ring learning with errors" (Ring LWE) problem. The scheme is very efficient, and has reasonably short ciphertexts. Our unoptimized implementation in Magma enjoys comparable efficiency to even optimized pairing-based schemes with the same level of security and homomorphic capacity. We also show a number of application-specific optimizations to the encryption scheme, most notably the ability to convert between different message encodings in a ciphertext.
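To make the "somewhat homomorphic" idea concrete, here is a toy symmetric Ring-LWE scheme in the Brakerski–Vaikuntanathan style supporting homomorphic addition. All parameters are toy values chosen for readability; nothing here reflects the paper's implementation or secure parameter sizes:

```python
import numpy as np

# Toy BV-style symmetric encryption over R_q = Z_q[x]/(x^n + 1).
# A ciphertext (a, b) with b = a*s + t*e + m decrypts to
# ((b - a*s mod q) centred) mod t, and componentwise addition of
# ciphertexts adds plaintexts.
n, q, t = 16, 40961, 2          # toy parameters; real ones are far larger

def polymul(u, v):
    full = np.convolve(u, v).astype(np.int64)
    lo, hi = full[:n], full[n:]
    lo[:hi.size] -= hi          # reduce mod x^n + 1 (x^n = -1)
    return lo % q

def small():
    return np.random.randint(-1, 2, n)   # ternary secret/noise

s = small()

def enc(m):                     # m: length-n array of values mod t
    a = np.random.randint(0, q, n)
    return a, (polymul(a, s) + t * small() + m) % q

def dec(ct):
    a, b = ct
    v = (b - polymul(a, s)) % q
    v = np.where(v > q // 2, v - q, v)   # centre into (-q/2, q/2]
    return v % t

def add(c1, c2):                # homomorphic addition of plaintexts
    return (c1[0] + c2[0]) % q, (c1[1] + c2[1]) % q

m1, m2 = np.random.randint(0, t, n), np.random.randint(0, t, n)
assert (dec(add(enc(m1), enc(m2))) == (m1 + m2) % t).all()
```

Homomorphic multiplication in the BV scheme grows the ciphertext by one ring element per product, which is exactly why the noise and size budget limits the number of operations a "somewhat" homomorphic scheme supports.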

1,053 citations


Journal ArticleDOI
TL;DR: In this article, the authors present the state of the art of the Job Demands–Resources (JD–R) model, which incorporates many possible working conditions and focuses on both negative and positive indicators of employee well-being.
Abstract: Motivation: The motivation of this overview is to present the state of the art of the Job Demands–Resources (JD–R) model whilst integrating the various contributions to the special issue. Research purpose: To provide an overview of the JD–R model, which incorporates many possible working conditions and focuses on both negative and positive indicators of employee well-being. Moreover, the studies of the special issue were introduced. Research design: Qualitative and quantitative studies on the JD–R model were reviewed to enlighten the health and motivational processes suggested by the model. Main findings: Next to the confirmation of the two suggested processes of the JD–R model, the studies of the special issue showed that the model can be used to predict workplace bullying, incidences of upper respiratory tract infection, work-based identity, and early retirement intentions. Moreover, whilst psychological safety climate could be considered as a hypothetical precursor of job demands and resources, compassion satisfaction moderated the health process of the model. Contribution/value-add: The findings of previous studies and the studies of the special issue were integrated in the JD–R model that can be used to predict well-being and performance at work. New avenues for future research were suggested. Practical/managerial implications: The JD–R model is a framework that can be used by organisations to improve employee health and motivation, whilst simultaneously improving various organisational outcomes.

915 citations


Journal ArticleDOI
TL;DR: In this paper, disulfide chemistry is used to introduce a self-healing ability in a covalently cross-linked rubber, which leads to full recovery of mechanical properties.
Abstract: Autonomous healing of damage is a common phenomenon in living organisms but is hardly ever encountered in synthetic materials. Disulfide chemistry is used to introduce a self-healing ability in a covalently cross-linked rubber. Autonomous healing of a cut takes place at moderate temperatures and leads to full recovery of mechanical properties. This result is achieved by introducing disulfide groups in the network that are able to exchange, leading to renewal of cross-links across the damaged surfaces. The healing process can be repeated many times. The combination of their unique self-healing properties and applicability for a large variety of polymers makes this approach ideal for coatings.

720 citations


Journal ArticleDOI
TL;DR: In this article, an overview is given of the various electrode and electrolyte materials that are suitable for 3D all-solid-state Li-ion microbatteries.
Abstract: With the increasing importance of wireless microelectronic devices the need for on-board power supplies is evidently also increasing. Possible candidates for microenergy storage devices are planar all-solid-state Li-ion microbatteries, which are currently under development by several start-up companies. However, to increase the energy density of these microbatteries further and to ensure a high power delivery, three-dimensional (3D) designs are essential. Therefore, several concepts have been proposed for the design of 3D microbatteries and these are reviewed. In addition, an overview is given of the various electrode and electrolyte materials that are suitable for 3D all-solid-state microbatteries. Furthermore, methods are presented to produce films of these materials on a nano- and microscale.

672 citations


Journal ArticleDOI
27 Oct 2011-Nature
TL;DR: It is demonstrated that chemical ozone destruction over the Arctic in early 2011 was—for the first time in the observational record—comparable to that in the Antarctic ozone hole.
Abstract: Chemical ozone destruction occurs over both polar regions in local winter–spring. In the Antarctic, essentially complete removal of lower-stratospheric ozone currently results in an ozone hole every year, whereas in the Arctic, ozone loss is highly variable and has until now been much more limited. Here we demonstrate that chemical ozone destruction over the Arctic in early 2011 was—for the first time in the observational record—comparable to that in the Antarctic ozone hole. Unusually long-lasting cold conditions in the Arctic lower stratosphere led to persistent enhancement in ozone-destroying forms of chlorine and to unprecedented ozone loss, which exceeded 80 per cent over 18–20 kilometres altitude. Our results show that Arctic ozone holes are possible even with temperatures much milder than those in the Antarctic. We cannot at present predict when such severe Arctic ozone depletion may be matched or exceeded. Since its emergence in the 1980s, the Antarctic ozone hole, the near-complete loss of lower-stratospheric ozone, has occurred every year. The possibility that a similar effect might occur in the Northern Hemisphere has been debated, but despite considerable variation in ozone levels in the Arctic, they had not reached the extremes seen in the south. Until this year. Observations made in the late winter and early spring of 2011 reveal ozone loss far outside the range previously observed over the Northern Hemisphere, comparable to some Antarctic ozone holes. The formation of the hole was driven by an unusually long cold snap and a high level of ozone-destroying chlorine. Although this effect is dramatic, it is difficult to predict whether similar Arctic ozone holes will develop in future.

597 citations


Journal ArticleDOI
TL;DR: It is shown that the extraction operator and Bézier elements provide an element structure for isogeometric analysis that can be easily incorporated into existing finite element codes, without any changes to element form and assembly algorithms, and standard data processing arrays.
Abstract: We develop finite element data structures for T-splines based on Bézier extraction generalizing our previous work for NURBS. As in traditional finite element analysis, the extracted Bézier elements are defined in terms of a fixed set of polynomial basis functions, the so-called Bernstein basis. The Bézier elements may be processed in the same way as in a standard finite element computer program, utilizing exactly the same data processing arrays. In fact, only the shape function subroutine needs to be modified while all other aspects of a finite element program remain the same. A byproduct of the extraction process is the element extraction operator. This operator localizes the topological and global smoothness information to the element level, and represents a canonical treatment of T-junctions, referred to as ‘hanging nodes’ in finite element analysis and a fundamental feature of T-splines. A detailed example is presented to illustrate the ideas. Copyright © 2011 John Wiley & Sons, Ltd.
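The extraction idea itself fits in a few lines: on each element the smooth spline basis is a linear map (the extraction operator) applied to the Bernstein basis, N(ξ) = Cₑ B(ξ). The sketch below uses the well-known operator for an interior element of a uniform quadratic B-spline; it illustrates the identity, not the paper's T-spline machinery:

```python
import numpy as np
from math import comb

def bernstein(p, xi):
    """Bernstein basis of degree p on the reference interval [0, 1]."""
    return np.array([comb(p, i) * xi**i * (1 - xi)**(p - i) for i in range(p + 1)])

# Extraction operator for an interior element of a uniform quadratic
# B-spline (a standard textbook case, assumed here for illustration).
C_e = np.array([[0.5, 0.0, 0.0],
                [0.5, 1.0, 0.5],
                [0.0, 0.0, 0.5]])

def spline_basis(xi):
    # the smooth basis on this element is C_e times the Bernstein basis
    return C_e @ bernstein(2, xi)

N = spline_basis(0.3)
assert abs(N.sum() - 1.0) < 1e-12   # partition of unity is preserved
print(N)
```

Because only Cₑ changes from element to element, a finite element code can keep its Bernstein shape-function routine fixed, which is the point the abstract makes about leaving assembly untouched.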

566 citations


Journal ArticleDOI
TL;DR: Three similarity metrics that can be used to answer queries on process repositories are presented: node matching similarity, which compares the labels and attributes attached to process model elements; structural similarity, which compares element labels as well as the topology of the process models; and behavioral similarity, which compares element labels as well as causal relations captured in the process model.

561 citations


Journal ArticleDOI
TL;DR: In this paper, an improved version of the DOMINO algorithm, DOMINO v2.0, was proposed for OMI based on better air mass factors (AMFs) and a correction for across-track stripes resulting from calibration errors in the OMI backscattered reflectances.
Abstract: We present an improved tropospheric nitrogen dioxide column retrieval algorithm (DOMINO v2.0) for OMI based on better air mass factors (AMFs) and a correction for across-track stripes resulting from calibration errors in the OMI backscattered reflectances. Since October 2004, NO2 retrievals from the Ozone Monitoring Instrument (OMI), a UV/Vis nadir spectrometer onboard NASA's EOS-Aura satellite, have been used with success in several scientific studies focusing on air quality monitoring, detection of trends, and NOx emission estimates. Dedicated evaluations of previous DOMINO tropospheric NO2 retrievals indicated their good quality, but also suggested that the tropospheric columns were susceptible to high biases (by 0–40%), probably because of errors in the air mass factor calculations. Here we update the DOMINO air mass factor approach. We calculate a new look-up table (LUT) for altitude-dependent AMFs based on more realistic atmospheric profile parameters, and include more surface albedo and surface pressure reference points than before. We improve the sampling of the TM4 model, resulting in a priori NO2 profiles that are better mixed throughout the boundary layer. We evaluate the NO2 profiles simulated with the improved TM4 sampling as used in the AMF calculations and show that they are highly consistent with in situ NO2 measurements from aircraft during the INTEX-A and INTEX-B campaigns in 2004 and 2006. Our air mass factor calculations are further updated by the implementation of a high-resolution terrain height and a high-resolution surface albedo climatology based on OMI measurements. Together with a correction for across-track stripes, the overall impact of the improved terrain height and albedo descriptions is modest (
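For orientation, the arithmetic at the heart of any AMF-based retrieval is compact: the total air mass factor is the a-priori-profile-weighted mean of the altitude-dependent (box) AMFs, and the vertical column is the fitted slant column divided by it. This is generic DOAS retrieval bookkeeping with invented numbers, not the DOMINO code:

```python
import numpy as np

# Generic AMF application (illustrative values, not DOMINO's LUT):
box_amf = np.array([0.4, 0.7, 1.1, 1.6, 2.0])   # per-layer sensitivity, surface to top
profile = np.array([4.0, 2.5, 1.0, 0.4, 0.1])   # a priori NO2 partial columns per layer

amf = (box_amf * profile).sum() / profile.sum() # profile-weighted total AMF
slant_column = 3.2e15                           # fitted slant column (molec/cm^2, toy)
vertical_column = slant_column / amf
print(f"AMF = {amf:.2f}, tropospheric VCD = {vertical_column:.3e} molec/cm^2")
```

The weighting explains why better-mixed a priori profiles and improved albedo and terrain inputs change the retrieved columns: they enter the AMF directly.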

Proceedings ArticleDOI
18 Nov 2011
TL;DR: Experiments clearly show that the practical results match the theoretical analysis, thereby indicating the possibilities for short-distance vehicle following, and validate the technical feasibility of the resulting control system.
Abstract: Road throughput can be increased by driving at small inter-vehicle time gaps. The amplification of velocity disturbances in upstream direction, however, poses limitations to the minimum feasible time gap. String-stable behavior is thus considered an essential requirement for the design of automatic distance control systems, which are needed to allow for safe driving at time gaps well below 1 s. Theoretical analysis reveals that this requirement can be met using wireless inter-vehicle communication to provide real-time information of the preceding vehicle, in addition to the information obtained by common Adaptive Cruise Control (ACC) sensors. In order to validate these theoretical results and to demonstrate the technical feasibility, the resulting control system, known as Cooperative ACC (CACC), is implemented on a test fleet consisting of six passenger vehicles. Experiments clearly show that the practical results match the theoretical analysis, thereby indicating the possibilities for short-distance vehicle following.
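A minimal simulation conveys why the wireless feedforward matters. The constant time-gap policy and gains below are invented for illustration and are not the controller or parameters used on the test fleet:

```python
import numpy as np

# Six-vehicle platoon with a constant time-gap spacing policy. Setting
# ff = 0.0 gives plain ACC (radar only); ff = 1.0 adds the predecessor's
# acceleration as a feedforward term, standing in for the wireless link.
dt, h, L, n = 0.01, 0.5, 5.0, 6          # step (s), time gap (s), car length (m), cars
kp, kd, ff = 0.2, 0.7, 1.0               # illustrative gains

x = np.array([-(L + 10.0) * i for i in range(n)])   # initial positions (m)
v = np.full(n, 20.0)                                # initial speeds (m/s)
a = np.zeros(n)

for k in range(4000):                    # 40 s of simulated time
    t = k * dt
    a_next = np.zeros(n)
    a_next[0] = -2.0 if 5.0 < t < 7.0 else 0.0      # lead vehicle brakes briefly
    for i in range(1, n):
        e = x[i - 1] - x[i] - L - h * v[i]          # spacing error
        de = v[i - 1] - v[i] - h * a[i]             # its time derivative
        a_next[i] = kp * e + kd * de + ff * a[i - 1]
    a = a_next
    v += a * dt
    x += v * dt

print("final speeds:", np.round(v, 2))   # the disturbance has died out
```

Logging the per-vehicle acceleration peaks with ff = 0.0 versus ff = 1.0 shows the amplification upstream that the paper identifies as the limit on small time gaps.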

Journal ArticleDOI
TL;DR: This State‐of‐the‐Art Report surveys available techniques for the visual analysis of large graphs and discusses various graph algorithmic aspects useful for the different stages of the visual graph analysis process.
Abstract: The analysis of large graphs plays a prominent role in various fields of research and is relevant in many important application areas. Effective visual analysis of graphs requires appropriate visual presentations in combination with respective user interaction facilities and algorithmic graph analysis methods. How to design appropriate graph analysis systems depends on many factors, including the type of graph describing the data, the analytical task at hand and the applicability of graph analysis methods. The most recent surveys of graph visualization and navigation techniques cover techniques that had been introduced until 2000 or concentrate only on graph layouts published until 2002. Recently, new techniques have been developed covering a broader range of graph types, such as time-varying graphs. Also, in accordance with ever-growing amounts of graph-structured data becoming available, the inclusion of algorithmic graph analysis and interaction techniques becomes increasingly important. In this State-of-the-Art Report, we survey available techniques for the visual analysis of large graphs. Our review first considers graph visualization techniques according to the type of graphs supported. The visualization techniques form the basis for the presentation of interaction approaches suitable for visual graph exploration. As an important component of visual graph analysis, we discuss various graph algorithmic aspects useful for the different stages of the visual graph analysis process. We also present main open research challenges in this field.

01 Jan 2011
TL;DR: In this paper, a methodology for analyzing the collective learning of the classroom community in terms of the evolution of classroom mathematical practices is described, and a sample analysis taken from a 1st-grade classroom teaching experiment is presented to illustrate how we coordinate a social perspective on communal practices with a psychological perspective on individual students' diverse ways of reasoning as they participate in those practices.
Abstract: In this article, we describe a methodology for analyzing the collective learning of the classroom community in terms of the evolution of classroom mathematical practices. To develop the rationale for this approach, we first ground the discussion in our work as mathematics educators who conduct classroom-based design research. We then present a sample analysis taken from a 1st-grade classroom teaching experiment that focused on linear measurement to illustrate how we coordinate a social perspective on communal practices with a psychological perspective on individual students’ diverse ways of reasoning as they participate in those practices. In the concluding sections of the article, we frame the sample analysis as a paradigm case in which to clarify aspects of the methodology and consider its usefulness for design research.

Journal ArticleDOI
TL;DR: This paper demonstrates that discovered process models can be extended with information to predict the completion time of running instances, using a configurable approach to construct a process model, augment it with time information learned from earlier instances, and use this to predict, for example, the completion time.
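The idea can be sketched in a few lines: learn, per reached state, the average remaining time across historical cases, then look up the state of a running case. The paper builds annotated transition systems with configurable state abstractions; the snippet below is a bare-bones stand-in that uses the activity prefix as the state, with an invented toy log:

```python
from collections import defaultdict

# historical cases: lists of (activity, timestamp-in-hours) pairs (toy data)
history = [
    [("register", 0), ("check", 4), ("decide", 9), ("pay", 12)],
    [("register", 0), ("check", 6), ("decide", 10), ("pay", 15)],
]

remaining = defaultdict(list)
for case in history:
    end = case[-1][1]
    for i, (act, t) in enumerate(case):
        state = tuple(a for a, _ in case[: i + 1])   # prefix as the state
        remaining[state].append(end - t)

def predict(prefix):
    """Average historical remaining time for a running case in this state."""
    obs = remaining.get(tuple(prefix))
    return sum(obs) / len(obs) if obs else None

print(predict(["register", "check"]))   # 8.5 hours remaining, on this toy log
```

Coarser state abstractions (e.g. the set of activities seen, or the last activity only) trade precision for generalization, which is the configurability the TL;DR refers to.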

Proceedings ArticleDOI
11 Apr 2011
TL;DR: A new process representation language is presented in combination with an accompanying process mining algorithm that results in easy-to-understand process models even in the case of non-trivial constructs, low-structured domains and the presence of noise.
Abstract: One of the aims of process mining is to retrieve a process model from a given event log. However, current techniques have problems when mining processes that contain non-trivial constructs or are low structured, and/or when the event logs contain noise. To overcome these problems, a new process representation language is presented in combination with an accompanying process mining algorithm. The most significant property of the new representation language lies in the way the semantics of splits and joins are represented, namely by so-called split/join frequency tables. This results in easy-to-understand process models even in the case of non-trivial constructs, low-structured domains and the presence of noise. This paper explains the new process representation language and how the mining algorithm works. The algorithm is implemented as a plug-in in the ProM framework. An illustrative example with noise and a real-life log of a complex and low-structured process are used to explicate the presented approach.
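The split/join frequency tables are built on top of basic log statistics. The snippet below shows only the standard first step that most process miners share, counting directly-follows pairs in an event log; the toy log is invented and this is not the paper's algorithm:

```python
from collections import Counter

# toy event log: each trace is an ordered list of activities
log = [
    ["register", "check", "decide", "pay"],
    ["register", "decide", "check", "pay"],
    ["register", "check", "decide", "reject"],
]

df = Counter()
for trace in log:
    for a, b in zip(trace, trace[1:]):   # consecutive pairs
        df[(a, b)] += 1

for (a, b), count in sorted(df.items()):
    print(f"{a} -> {b}: {count}")
```

Frequencies like these make the approach robust to noise: rare pairs (for instance a single out-of-order event) can be thresholded away instead of forcing a spurious path into the model.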

Journal ArticleDOI
TL;DR: This paper provides a literature review which categorizes the research to date on lateral transshipments, so that differences can be understood and gaps within the literature can be identified.

Journal ArticleDOI
TL;DR: In this article, a case study of an office building is performed with respect to various building performance parameters, and the implications of the results for energy consumption and thermal comfort are demonstrated and elaborated.

Book
12 May 2011
TL;DR: In this book, the Von Guericke force is used to introduce depletion effects in colloid-polymer mixtures, and free volume theory is applied to these mixtures.
Abstract (table of contents):
Preface
1 Introduction
1.1 Colloidal interactions
1.2 The Von Guericke force
1.3 Depletion
1.4 Manifestations of depletion effects in colloid + polymer mixtures
1.5 Historical overview on depletion
1.5.1 Experimental observations before the 1950s
1.5.2 Attractive forces in Nagoya
1.5.3 Systematic studies after AO
1.6 Outline of this book
2 Depletion Interaction
2.1 Depletion interaction due to penetrable spheres
2.1.1 Depletion interaction between two flat plates
2.1.2 Depletion interaction between two spheres
2.2 Depletion interaction due to ideal polymers
2.2.1 Depletion interaction between two flat plates
2.2.2 Interaction between two spheres
2.3 Depletion interaction due to excluded volume polymers
2.3.1 Characteristic length scales in polymer solutions
2.3.2 Osmotic pressure of polymer solutions
2.3.3 Depletion thickness due to excluded volume polymers
2.3.4 Evaluation of the depletion interaction due to excluded volume polymers
2.4 Depletion interaction due to spheres
2.4.1 Concentration profiles near a hard wall and between two hard walls
2.4.2 Depletion interaction between two flat plates
2.4.3 Depletion interaction between two (big) spheres
2.5 Depletion interaction due to rods
2.5.1 Depletion interaction between two flat plates
2.5.2 Interaction between two (big) colloidal spheres using the Derjaguin approximation
2.6 Depletion interaction due to disks
2.6.1 Depletion interaction between two flat plates
2.6.2 Interaction between two (big) colloidal spheres using the Derjaguin approximation
2.7 Measurements of depletion interactions
2.7.1 Atomic force microscope
2.7.2 Total internal reflection microscopy
2.7.3 Optical tweezers
3 Phase transitions of hard spheres plus depletants: basics
3.1 Introduction - colloid/atom analogy
3.2 The hard sphere fluid-crystal transition
3.3 Free volume theory
Appendix 3.1 Statistical mechanical derivation of the free volume theory
4 Stability of colloid-polymer mixtures
4.1 Experimental state diagrams of colloid-polymer mixtures
4.2 Phase behaviour of colloid + ideal polymer mixtures
4.3 Phase behaviour of sphere plus interacting polymer mixtures (GFVT)
4.3.1 Depletion thickness and osmotic pressure
4.3.2 Protein limit
4.4 Non-equilibrium phenomena
4.4.1 Unmixing kinetics
4.4.2 Aggregation and gelation
4.4.3 Depletion effects on colloidal glasses
5 Phase transitions of hard spheres plus colloids
5.1 Free volume theory for big plus small hard spheres
5.2 Phase behavior of mixed spheres
5.2.1 Phase separation in binary mixtures differing only in diameter
5.2.2 Mixtures of latex particles and micelles
5.2.3 Oil-in-water emulsion particles and micelles of the stabilizing surfactant
5.3 Free volume theory for sphere-rod mixtures
5.4 Phase behaviour of sphere-rod mixtures
6 Suspensions of rod-like colloids plus polymers
6.1 Onsager theory of the isotropic-nematic transition
6.2 Scaled particle theory of the isotropic-nematic transition
6.3 Isotropic-nematic phase behaviour of rods plus phs
6.4 I-N phase behaviour of rods plus polymers
6.4.1 Rod-like colloids plus ideal polymers
6.4.2 Rod-like colloids plus interacting polymers
6.5 Experiments on rod/polymer mixtures
6.5.1 Stiff virus particles + polymer
6.5.2 Cellulose nanocrystals + polymer
6.5.3 Sterically stabilized colloidal boehmite rods + polymer
6.6 Rod/polymer mixtures: full phase diagrams
6.7 Concluding remarks
List of symbols
Index
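The quantity that anchors chapter 2 is the Asakura–Oosawa depletion potential between two hard spheres of radius R in a dilute bath of penetrable hard spheres of radius δ at osmotic pressure Π. Quoting the standard result for orientation (notation ours, not necessarily the book's):

```latex
W(r) = -\Pi\, V_{\mathrm{ov}}(r), \qquad
V_{\mathrm{ov}}(r) = \frac{4\pi}{3}(R+\delta)^3
\left[ 1 - \frac{3r}{4(R+\delta)} + \frac{r^3}{16(R+\delta)^3} \right],
\quad 2R \le r \le 2(R+\delta),
```

with W(r) = 0 for larger separations: the attraction is simply the osmotic pressure times the lens-shaped overlap of the two depletion layers, which vanishes once the depletants fit between the spheres.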

Journal ArticleDOI
01 Jan 2011
TL;DR: It is demonstrated that in compact distribution grids voltage control problems and blinding of protection are not likely to occur and that false tripping and fault levels have to be considered carefully.
Abstract: In today's distribution grids the number of distributed generation (DG) units is increasing rapidly. Combined heat and power (CHP) plants and wind turbines are most often installed. Integration of these DG units into the distribution grid leads to planning as well as operational challenges. Based on the experience of a Dutch distribution system operator (DSO), this paper addresses several possibilities to handle grid planning issues. Effects on voltage control, grid protection, and fault levels are investigated and described. These aspects are illustrated with the aid of simulations on an existing distribution grid. It is demonstrated that in compact distribution grids voltage control problems and blinding of protection are not likely to occur and that false tripping and fault levels have to be considered carefully.
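For the voltage-control issue discussed here, the usual first-order check is the feeder voltage-rise approximation ΔV ≈ (R·P + X·Q)/V. The sketch below uses made-up cable and generator figures, not values from the paper:

```python
# First-order voltage rise at a DG connection point (textbook
# approximation, not from the paper): dV ≈ (R*P + X*Q) / V.
R, X = 0.3, 0.1        # cumulative cable resistance/reactance to the node (ohm, assumed)
P, Q = 2.0e6, 0.0      # CHP active/reactive power injection (W, var, assumed)
V = 10.0e3             # nominal medium-voltage level (V, assumed)

dv = (R * P + X * Q) / V
print(f"voltage rise approx. {dv:.0f} V ({100 * dv / V:.2f} % of nominal)")
```

In compact (short, low-impedance) grids R and X are small, which is consistent with the paper's finding that voltage control problems are unlikely there.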

Journal ArticleDOI
TL;DR: Following last decade’s programmatic papers on Evolutionary Economic Geography, this article reports on recent empirical advances and on how this empirical work can be positioned vis-a-vis other strands of research in economic geography.
Abstract: Following last decade’s programmatic papers on Evolutionary Economic Geography, we report on recent empirical advances and how this empirical work can be positioned vis-a-vis other strands of research in economic geography. First, we review studies on the path dependent nature of clustering, and how the evolutionary perspective relates to that of New Economic Geography. Second, we discuss research on agglomeration externalities in Regional Science, and how Evolutionary Economic Geography contributed to this literature with the concepts of cognitive proximity and related variety. Third, we go into the role of institutions in Evolutionary Economic Geography, and we relate this to the way Institutional Economic Geography tends to view institutions. From this discussion, a number of new research challenges are derived.

Journal ArticleDOI
TL;DR: The huge advances of the past decade in observing, controllably creating and manipulating single dopants are described, as well as their application in novel devices that open up the new field of solotronics (solitary dopant optoelectronics).
Abstract: The sensitive dependence of a semiconductor's electronic, optical and magnetic properties on dopants has provided an extensive range of tunable phenomena to explore and apply to devices. Recently it has become possible to move past the tunable properties of an ensemble of dopants to identify the effects of a solitary dopant on commercial device performance as well as locally on the fundamental properties of a semiconductor. New applications that require the discrete character of a single dopant, such as single-spin devices in the area of quantum information or single-dopant transistors, demand a further focus on the properties of a specific dopant. This article describes the huge advances in the past decade towards observing, controllably creating and manipulating single dopants, as well as their application in novel devices, which open up the new field of solotronics (solitary dopant optoelectronics).

Journal ArticleDOI
TL;DR: The synthesis, physical properties, and catalytic activity of a water-soluble segmented terpolymer in which a helical structure in the apolar core is created around a ruthenium-based catalyst is reported.
Abstract: Enzymes are a source of inspiration for chemists attempting to create versatile synthetic catalysts. In order to arrive at a polymeric chain carrying catalytic units separated spatially, it is a prerequisite to fold these polymers in water into well-defined compartmentalized architectures thus creating a catalytic core. Herein, we report the synthesis, physical properties, and catalytic activity of a water-soluble segmented terpolymer in which a helical structure in the apolar core is created around a ruthenium-based catalyst. The supramolecular chirality of this catalytic system is the result of the self-assembly of benzene-1,3,5-tricarboxamide side chains, while the catalyst arises from the sequential ruthenium-catalyzed living radical polymerization of the different monomers followed by ligand exchange. The polymers exhibit a two-state folding process and show transfer hydrogenation in water.

Journal ArticleDOI
TL;DR: In this article, the state-of-the-art summary of the recent developments in the design of discrete systems and functional materials is presented, with a focus on H-bonded supramolecular assemblies of π-conjugated systems.
Abstract: Recent developments in the area of H-bonded supramolecular assemblies of π-conjugated systems, that is, oligomers and polymers, are described. The state-of-the-art summary of the recent developments in the design of discrete systems and functional materials is presented.

Journal ArticleDOI
TL;DR: The study suggests that the proposed UX Curve method can be used as a straightforward tool for understanding the reasons why user experience improves or worsens in long-term product use and how these reasons relate to customer loyalty.

Journal ArticleDOI
23 Sep 2011-Science
TL;DR: This work presents a method for the simultaneous determination of megacity NOx emissions and lifetimes from satellite measurements by analyzing the downwind patterns of NO2 separately for different wind conditions.
Abstract: Megacities are immense sources of air pollutants, with large impacts on air quality and climate. However, emission inventories in many of them still are highly uncertain, particularly in developing countries. Satellite observations allow top-down estimates of emissions to be made for nitrogen oxides (NOx = NO + NO2), but require poorly quantified a priori information on the NOx lifetime. We present a method for the simultaneous determination of megacity NOx emissions and lifetimes from satellite measurements by analyzing the downwind patterns of NO2 separately for different wind conditions. Daytime lifetimes are ~4 hours at low and mid-latitudes, but ~8 hours in wintertime for Moscow. The derived NOx emissions are generally in good agreement with existing emission inventories, but are higher by a factor of 3 for the Saudi Arabian capital Riyadh.
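The essence of the method is a one-parameter fit: under a steady wind w, line densities of NO2 decay downwind as exp(-x/x0) with decay length x0 = w·τ, and the emission rate follows from the fitted source strength. A synthetic-data sketch (all numbers invented, and deliberately simpler than the paper's exponentially modified Gaussian fit):

```python
import numpy as np
from scipy.optimize import curve_fit

w = 5.0                                   # mean wind speed (m/s), assumed
x = np.linspace(0.0, 3.0e5, 30)           # downwind distance (m)
true_tau = 4 * 3600.0                     # 4 h lifetime used to fake the data
N = 2.0e23 * np.exp(-x / (w * true_tau))  # synthetic NO2 line densities (molec/m)
N *= 1.0 + 0.05 * np.random.randn(x.size) # add measurement noise

decay = lambda x, N0, x0: N0 * np.exp(-x / x0)
(N0, x0), _ = curve_fit(decay, x, N, p0=(1.0e23, 1.0e5))

tau = x0 / w                              # lifetime from the decay length
E = N0 * w                                # steady-state emission rate (molec/s)
print(f"tau = {tau / 3600:.1f} h, E = {E:.2e} molecules/s")
```

Sorting the satellite scenes by wind direction and speed before fitting is what lets the method separate τ from E, since both shape the downwind plume.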

Journal ArticleDOI
TL;DR: Nanoprecipitation is a facile, mild, and low-energy-input process for the preparation of polymeric nanoparticles, as discussed by the authors; in combination with high-throughput devices such as microfluidics, pipetting robots, inkjet printers, and automated analytical instrumentation, its abilities may broaden tremendously.
Abstract: Nanoprecipitation is a facile, mild, and low energy input process for the preparation of polymeric nanoparticles. Basic requirements, as well as common techniques for the self-assembly of non-charged and non-amphiphilic macromolecules into defined nanoparticles, are described. At present, the primary focus of polymer nanoprecipitation research lies on poly(lactic acid) (PLA) and its copolymer poly(lactic-co-glycolic acid) (PLGA). This contribution thus emphasises polymers beyond PLA systems, such as common industrial or tailored lab-made polymers, and their ability to form well-defined, functional nanoparticles for a variety of applications, now and in the past two decades. Moreover, in combination with high-throughput devices such as microfluidics, pipetting robots, inkjet printers, and automated analytical instrumentation, the abilities of nanoprecipitation may broaden tremendously, with significant effects on new applications.

Journal ArticleDOI
TL;DR: It is shown that the eight soundness notions described in the literature are decidable for workflow nets; however, most extensions make all of these notions undecidable.
Abstract: Workflow nets, a particular class of Petri nets, have become one of the standard ways to model and analyze workflows. Typically, they are used as an abstraction of the workflow that is used to check the so-called soundness property. This property guarantees the absence of livelocks, deadlocks, and other anomalies that can be detected without domain knowledge. Several authors have proposed alternative notions of soundness and have suggested to use more expressive languages, e.g., models with cancellations or priorities. This paper provides an overview of the different notions of soundness and investigates these in the presence of different extensions of workflow nets. We will show that the eight soundness notions described in the literature are decidable for workflow nets. However, most extensions will make all of these notions undecidable. These new results show the theoretical limits of workflow verification. Moreover, we discuss some of the analysis approaches described in the literature.
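As a concrete anchor for what "soundness" asks, the sketch below brute-forces the option-to-complete property of a tiny workflow net over its reachability graph. It assumes a safe net (markings as sets) and omits the proper-completion and dead-transition checks, so it illustrates the notion rather than serving as a verification tool:

```python
from collections import deque

# transitions: name -> (places consumed, places produced); safe net assumed
net = {
    "t1": (frozenset({"i"}),  frozenset({"p1", "p2"})),
    "t2": (frozenset({"p1"}), frozenset({"p3"})),
    "t3": (frozenset({"p2"}), frozenset({"p4"})),
    "t4": (frozenset({"p3", "p4"}), frozenset({"o"})),
}
initial, final = frozenset({"i"}), frozenset({"o"})

def successors(m):
    for pre, post in net.values():
        if pre <= m:                      # transition enabled
            yield (m - pre) | post

def reachable(start):
    seen, todo = {start}, deque([start])
    while todo:
        m = todo.popleft()
        for m2 in successors(m):
            if m2 not in seen:
                seen.add(m2)
                todo.append(m2)
    return seen

# option to complete: the final marking stays reachable from every state
states = reachable(initial)
print(all(final in reachable(m) for m in states))   # True for this net
```

The decidability results in the paper say this kind of exhaustive check can in principle be made to terminate for plain workflow nets, whereas adding constructs such as cancellation or priorities pushes the same questions beyond any algorithm.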

Journal ArticleDOI
TL;DR: In this paper, the performance of two different modeling approaches (RANS standard k-ε and LES) is evaluated for pollutant dispersion in an actual urban environment: downtown Montreal.

Journal ArticleDOI
TL;DR: The beating heart creates blood pressure and flow pulsations that propagate as waves through the arterial tree and are reflected at transitions in arterial geometry and elasticity; these waves carry information about the matter in which they propagate.
Abstract: The beating heart creates blood pressure and flow pulsations that propagate as waves through the arterial tree that are reflected at transitions in arterial geometry and elasticity. Waves carry information about the matter in which they propagate. Therefore, modeling of arterial wave propagation extends our knowledge about the functioning of the cardiovascular system and provides a means to diagnose disorders and predict the outcome of medical interventions. In this review we focus on the physical and mathematical modeling of pulse wave propagation, based on general fluid dynamical principles. In addition we present potential applications in cardiovascular research and clinical practice. Models of short- and long-term adaptation of the arterial system and methods that deal with uncertainties in personalized model parameters and boundary conditions are briefly discussed, as they are believed to be major topics for further study and will boost the significance of arterial pulse wave modeling even more.
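A cornerstone relation in such 1D pulse wave models (standard textbook material, quoted here for context rather than taken from the review) is the Moens–Korteweg expression for the wave speed in a thin-walled elastic tube:

```latex
c_0 = \sqrt{\frac{E\,h}{2\,\rho\,R}}
```

where E is the wall's Young's modulus, h the wall thickness, ρ the blood density, and R the vessel radius. Stiffer or thicker-walled vessels carry faster waves, which is why pulse wave velocity serves as a clinical marker of arterial stiffness and why wave propagation models can support the diagnostic applications the review describes.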