
Showing papers by "Grenoble Institute of Technology" published in 2015


Journal ArticleDOI
TL;DR: This article presents a comprehensive review of the literature on nanocellulose isolation and demonstrates the potential of cellulose nanomaterials for a wide range of high-tech applications.
Abstract: The main goal of this article is to provide an overview of recent research in the area of cellulose nanomaterial production from different sources. Due to their abundance, renewability, high strength and stiffness, eco-friendliness and low weight, numerous studies have been reported on the isolation of cellulose nanomaterials from different cellulosic sources and their use in high-performance applications. This report covers an introduction to the definition of nanocellulose as well as the methods used for isolation of nanomaterials (including nanocrystals and nanofibers, CNCs and CNFs, respectively) from various sources. The web-like network structure (CNFs) can be extracted from natural sources using mechanical processes, which include high-pressure homogenization, grinding and refining treatments. Also, rod-like CNCs can be isolated from sources such as wood, plant fibers, agricultural and industrial bioresidues, tunicates and bacterial cellulose using an acid hydrolysis process. Following this, the article focuses on the characterization methods, material properties and structures. Encyclopedic characteristics of CNFs and CNCs obtained from different source materials and/or studies are also included. The current report is a comprehensive review of the literature regarding nanocellulose isolation and demonstrates the potential of cellulose nanomaterials for a wide range of high-tech applications.

624 citations


Journal ArticleDOI
TL;DR: In this article, new pansharpening techniques designed for hyperspectral data are compared with some of the state-of-the-art methods for multispectral pansharpening, adapted to hyperspectral images.
Abstract: Pansharpening aims at fusing a panchromatic image with a multispectral one, to generate an image with the high spatial resolution of the former and the high spectral resolution of the latter. In the last decade, many algorithms have been presented in the literature for pansharpening using multispectral data. With the increasing availability of hyperspectral systems, these methods are now being adapted to hyperspectral images. In this work, we compare new pansharpening techniques designed for hyperspectral data with some of the state-of-the-art methods for multispectral pansharpening, which have been adapted for hyperspectral data. Eleven methods from different classes (component substitution, multiresolution analysis, hybrid, Bayesian and matrix factorization) are analyzed. These methods are applied to three datasets and their effectiveness and robustness are evaluated with widely used performance indicators. In addition, all the pansharpening techniques considered in this paper have been implemented in a MATLAB toolbox that is made available to the community.
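As a rough illustration of the multiresolution-analysis class mentioned above, the sketch below injects the high-frequency detail of a panchromatic image into each upsampled multispectral band. The Gaussian low-pass, the covariance-based injection gain, and all parameter values are assumptions of this toy example, not the toolbox's implementation.

```python
import numpy as np

def pansharpen_mra(ms_up, pan, sigma=2.0, ksize=9):
    """Toy MRA pansharpening: add the PAN high-frequency detail,
    scaled by a band-wise gain, to each upsampled MS band.
    ms_up: (H, W, B) multispectral image upsampled to PAN resolution.
    pan:   (H, W) panchromatic image."""
    # Separable Gaussian low-pass (a crude stand-in for the sensor MTF).
    ax = np.arange(ksize) - ksize // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    pan_lp = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, pan)
    pan_lp = np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, pan_lp)
    detail = pan - pan_lp
    out = np.empty_like(ms_up, dtype=float)
    for b in range(ms_up.shape[2]):
        band = ms_up[:, :, b].astype(float)
        # Injection gain: covariance between the band and the low-passed PAN.
        gain = np.cov(band.ravel(), pan_lp.ravel())[0, 1] / np.var(pan_lp)
        out[:, :, b] = band + gain * detail
    return out

rng = np.random.default_rng(0)
fused = pansharpen_mra(rng.standard_normal((32, 32, 3)), rng.standard_normal((32, 32)))
```

Component-substitution methods differ mainly in replacing the band-wise injection with a substitution of an intensity component; the detail-injection skeleton stays the same.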

620 citations


Journal ArticleDOI
TL;DR: The split augmented Lagrangian shrinkage algorithm (SALSA), which is an instance of the alternating direction method of multipliers (ADMM), is tailored to this optimization problem by means of a convenient variable splitting, yielding an effective algorithm that outperforms the state of the art.
Abstract: Hyperspectral remote sensing images (HSIs) usually have high spectral resolution and low spatial resolution. Conversely, multispectral images (MSIs) usually have low spectral and high spatial resolutions. The problem of inferring images that combine the high spectral and high spatial resolutions of HSIs and MSIs, respectively, is a data fusion problem that has been the focus of recent active research due to the increasing availability of HSIs and MSIs retrieved from the same geographical area. We formulate this problem as the minimization of a convex objective function containing two quadratic data-fitting terms and an edge-preserving regularizer. The data-fitting terms account for blur, different resolutions, and additive noise. The regularizer, a form of vector total variation, promotes piecewise-smooth solutions with discontinuities aligned across the hyperspectral bands. The downsampling operator accounting for the different spatial resolutions, the nonquadratic and nonsmooth nature of the regularizer, and the very large size of the HSI to be estimated lead to a hard optimization problem. We deal with these difficulties by exploiting the fact that HSIs generally “live” in a low-dimensional subspace and by tailoring the split augmented Lagrangian shrinkage algorithm (SALSA), which is an instance of the alternating direction method of multipliers (ADMM), to this optimization problem, by means of a convenient variable splitting. The spatial blur and the spectral linear operators linked, respectively, with the HSI and MSI acquisition processes are also estimated, and we obtain an effective algorithm that outperforms the state of the art, as illustrated in a series of experiments with simulated and real-life data.
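The convex objective described above can be written compactly. The notation below (Y_h and Y_m for the observed HSI and MSI, Z for the target image, B and S for blur and spatial downsampling, R for the spectral response, and a vector-TV regularizer) is this sketch's assumption rather than the paper's exact symbols:

```latex
\min_{\mathbf{Z}}\;
\frac{1}{2}\left\|\mathbf{Y}_h-\mathbf{Z}\mathbf{B}\mathbf{S}\right\|_F^2
+\frac{\lambda_m}{2}\left\|\mathbf{Y}_m-\mathbf{R}\mathbf{Z}\right\|_F^2
+\lambda_{\varphi}\,\varphi_{\mathrm{TV}}(\mathbf{Z})
```

The low-dimensional subspace assumption (writing Z as a small set of coefficient images times a fixed basis) shrinks the number of unknowns, and the SALSA/ADMM splitting turns the three terms into separately solvable subproblems.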

453 citations


Journal ArticleDOI
TL;DR: The main objective of this survey paper is to recall the concept of the AP along with its modifications and generalizations, with special emphasis on remote sensing image classification, and to summarize the important aspects of its efficient utilization, while also listing potential future works.
Abstract: Just over a decade has passed since the concept of the morphological profile was defined for the analysis of remote sensing images. Since then, the morphological profile has largely proved to be a powerful tool able to model spatial information (e.g., contextual relations) of the image. However, due to the shortcomings of the morphological profile, many variants, extensions, and refinements of its definition have appeared, showing that it is still under continuous development. The recently introduced, theoretically sound attribute profiles (APs) can be considered a generalization of the morphological profile, and a powerful tool to model the spatial information existing in the scene. Although the concept of the AP has been introduced in remote sensing only recently, an extensive literature on its use in different applications and on different types of data has appeared. Indeed, the great number of contributions addressing the application of the AP to many tasks (e.g., classification, object detection, segmentation, change detection, etc.) and to different types of images (e.g., panchromatic, multispectral, and hyperspectral) proves that the AP is an effective and modern tool. The main objective of this survey paper is to recall the concept of the AP along with its modifications and generalizations, with special emphasis on remote sensing image classification, and to summarize the important aspects of its efficient utilization, while also listing potential future works.

342 citations


Journal ArticleDOI
TL;DR: ABortable STate mAChine replicaTion is presented, a new abstraction for designing and reconfiguring generalized replicated state machines that are, unlike traditional state machines, allowed to abort executing a client's request if "something goes wrong".
Abstract: We present Abstract (ABortable STate mAChine replicaTion), a new abstraction for designing and reconfiguring generalized replicated state machines that are, unlike traditional state machines, allowed to abort executing a client's request if "something goes wrong." Abstract can be used to considerably simplify the incremental development of efficient Byzantine fault-tolerant state machine replication (BFT) protocols that are notorious for being difficult to develop. In short, we treat a BFT protocol as a composition of Abstract instances. Each instance is developed and analyzed independently and optimized for specific system conditions. We illustrate the power of Abstract through several interesting examples. We first show how Abstract can yield benefits of a state-of-the-art BFT protocol in a less painful and error-prone manner. Namely, we develop AZyzzyva, a new protocol that mimics the celebrated best-case behavior of Zyzzyva using less than 35% of the Zyzzyva code. To cover worst-case situations, our abstraction enables one to use in AZyzzyva any existing BFT protocol. We then present Aliph, a new BFT protocol that outperforms previous BFT protocols in terms of both latency (by up to 360%) and throughput (by up to 30%). Finally, we present R-Aliph, an implementation of Aliph that is robust, that is, whose performance degrades gracefully in the presence of Byzantine replicas and Byzantine clients.

262 citations


Journal ArticleDOI
TL;DR: In this paper, the authors focus on the reliability of a selection of potential components or materials used in the package assembly, such as the substrates, the die attaches, the interconnections, and the encapsulation materials.
Abstract: In order to take full advantage of high-temperature SiC and GaN operating devices, package materials able to withstand high-temperature storage and large thermal cycles have been investigated. The temperatures under consideration here are higher than 200 °C. Such temperatures are required for several potential applications, such as down-hole oil and gas well logging, aircraft, automotive, and space exploration. This review focuses on the reliability of a selection of potential components or materials used in the package assembly, such as the substrates, the die attaches, the interconnections, and the encapsulation materials. It reveals that substrates with low coefficient of thermal expansion (CTE) conductors or with more fracture-resistant ceramics are potential candidates for high temperatures. Reliable solutions for die attaches and interconnections are also available through the use of compatible metallization schemes. At this level, the reliability can also be improved by reducing the CTE mismatch between assembled materials. The encapsulation remains the most limiting packaging component, since hard materials present thermomechanical reliability issues while soft materials have low degradation temperatures. The review identifies reliable components and materials for high-temperature wide-bandgap semiconductors and is expected to be very useful for researchers working on the development of high-temperature electronics.

254 citations


Journal ArticleDOI
TL;DR: In this article, an air-filled substrate integrated waveguide (SIW) fabricated using a multilayer printed circuit board process is proposed for millimeter-wave applications, which generally require low cost, low-loss performance, and excellent power-handling capability.
Abstract: An air-filled substrate integrated waveguide (SIW) made of a multilayer printed circuit board process is proposed in this paper. It is of particular interest for millimeter-wave applications that generally require low cost and low-loss performance and excellent power-handling capability. This three-layered air-filled SIW allows for substantial loss reduction and power-handling capability enhancement. The top and bottom layers may make use of a low-cost standard substrate such as FR-4 on which baseband or digital circuits can be implemented so to obtain a very compact, high-performance, low-cost, and self-packaged millimeter-wave integrated system. Over Ka-band (U-band), it is shown that the air-filled SIW compared to its dielectric-filled counterparts based on Rogers substrates RT/Duroid 5880 and also 6002 reduces losses by a mean value of 0.068 dB/cm (0.098 dB/cm) and 0.104 dB/cm (0.152 dB/cm), increases average power-handling capability by 8 dB (6 dB) and 7.5 dB (5.7 dB), and quality factor by 2.7 (2.8) and 3.6 (3.8) times, respectively. The peak power-handling capability of the proposed structure is also studied. A wideband transition is presented to facilitate interconnects of the proposed air-filled SIW with dielectric-filled SIW. Design steps of this transition are detailed and its bandwidth limitation due to fabrication tolerances is theoretically examined and established. For validation purposes, a back-to-back transition operating over the Ka-band is fabricated. It achieves a return loss better than 15 dB and an insertion loss of 0.6 ± 0.2 dB (0.3 ± 0.1 dB for the transition) from 27 to 40 GHz. Finally, two elementary circuits, namely, the T-junction and the 90° hybrid coupler based on the air-filled SIW, are also demonstrated.

223 citations


Journal ArticleDOI
TL;DR: The antimicrobial activity of the coated papers against Saccharomyces cerevisiae, the Gram-negative bacterium Escherichia coli, and the Gram-positive bacterium Staphylococcus aureus was investigated and expressed in terms of the percent reduction in the surviving number (CFU) of the tested organisms.

208 citations


Journal ArticleDOI
TL;DR: The goal of this study is to produce flexible films using neutral poly(ethylene glycol) (PEG) and to modulate their coloration using an anionic polyacrylate (PAAS); up to 160 μmol/g CNC of PAAS can be added to tune the coloration of the CNC films.
Abstract: One property of sulfated cellulose nanocrystals (CNCs) is their ability to self-assemble from a concentrated suspension under specific drying conditions into an iridescent film. Such colored films are very brittle, which makes them difficult to handle or integrate within an industrial process. The goal of this study is (i) to produce flexible films using neutral poly(ethylene glycol) (PEG) and (ii) to modulate their coloration using an anionic polyacrylate (PAAS). The first part is dedicated to studying the physicochemical interactions of the two polymers with CNCs using techniques such as zeta potential measurements, dynamic light scattering (DLS), quartz crystal microbalance (QCM), and atomic force microscopy (AFM). Iridescent solid films were then produced and characterized using scanning electron microscopy (SEM) and UV-visible spectroscopy. The mechanical and thermal properties of films incorporating CNC were measured to evaluate improvements in flexibility. The addition of 10 wt % of PEG makes these films much more flexible (with a doubling of the elongation), with the coloration being preserved and the temperature of degradation increasing by almost 35 °C. Up to 160 μmol/g CNC of PAAS can be added to tune the coloration of the CNC films by producing a narrower, stronger coloration in the visible spectrum (higher absorption) with a well-pronounced fingerprint texture.

185 citations


Journal ArticleDOI
15 Jan 2015-Polymer
TL;DR: In this article, the effects of CNC silane surface treatment on the morphology, mechanical and thermal properties, viscoelastic behavior and water absorption of reinforced UPR have been studied.

163 citations


Journal ArticleDOI
TL;DR: This letter presents an effective application of sparse representation (SR) theory to the fusion of multispectral and panchromatic images, proposing an algorithm that exploits the self-similarity of details through the scales and comparing it with classical and recent pansharpening methods at both reduced and full resolution.
Abstract: The application of sparse representation (SR) theory to the fusion of multispectral (MS) and panchromatic images is giving a large impulse to this topic, which is recast as a signal reconstruction problem from a reduced number of measurements. This letter presents an effective implementation of this technique, in which the application of SR is limited to the estimation of missing details that are injected in the available MS image to enhance its spatial features. We propose an algorithm exploiting the details self-similarity through the scales and compare it with classical and recent pansharpening methods, both at reduced and full resolution. Two different data sets, acquired by the WorldView-2 and IKONOS sensors, are employed for validation, achieving remarkable results in terms of spectral and spatial quality of the fused product.

Journal ArticleDOI
01 Sep 2015
TL;DR: This Task Force paper summarizes various applications of digital RT simulation technologies in the design, analysis, and testing of power and energy systems.
Abstract: Real-time (RT) simulation is a highly reliable simulation method that is mostly based on electromagnetic transient simulation of complex systems comprising many domains. It is increasingly used in power and energy systems for both academic research and industrial applications. Due to the evolution of the computing power of RT simulators in recent years, new classes of applications and expanded fields of practice could now be addressed with RT simulation. This increase in computation power implies that models can be built more accurately and the whole simulation system gets closer to reality. This Task Force paper summarizes various applications of digital RT simulation technologies in the design, analysis, and testing of power and energy systems.

Journal ArticleDOI
TL;DR: Experimental results on both simulated and real hyperspectral data verify the effectiveness of the RS ensemble methods for classifying both spectral and spatial information (EMAPs); the key parameters of RS ensembles and their computational complexity are also investigated.
Abstract: Classification is one of the most important techniques to the analysis of hyperspectral remote sensing images. Nonetheless, there are many challenging problems arising in this task. Two common issues are the curse of dimensionality and the spatial information modeling. In this paper, we present a new general framework to train series of effective classifiers with spatial information for classifying hyperspectral data. The proposed framework is based on the two key observations: 1) the curse of dimensionality and the high feature-to-instance ratio can be alleviated by using random subspace (RS) ensembles; and 2) the spatial–contextual information is modeled by the extended multiattribute profiles (EMAPs). Two fast learning algorithms, i.e., decision tree (DT) and extreme learning machine (ELM), are selected as the base classifiers. Six RS ensemble methods, namely, RS with DT, random forest (RF), rotation forest, rotation RF (RoRF), RS with ELM (RSELM), and rotation subspace with ELM (RoELM), are constructed by the multiple base learners. Experimental results on both simulated and real hyperspectral data verify the effectiveness of the RS ensemble methods for the classification of both spectral and spatial information (EMAPs). On the University of Pavia Reflective Optics Spectrographic Imaging System image, our proposed approaches, i.e., both RSELM and RoELM with EMAPs, achieve the state-of-the-art performances, which demonstrates the advantage of the proposed methods. The key parameters in RS ensembles and the computational complexity are also investigated in this paper.
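The random-subspace idea itself is easy to sketch: each base learner is trained on a random subset of the spectral features, and predictions are combined by majority vote. The toy nearest-centroid base classifier below stands in for the DT/ELM learners of the paper; it is an illustrative assumption, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid_fit(X, y):
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def centroid_predict(model, X):
    classes, cents = model
    d = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=-1)
    return classes[d.argmin(axis=1)]

def rs_fit(X, y, n_learners=15, frac=0.5):
    """Random-subspace ensemble: every base learner sees only a random
    subset of the features, which mitigates the high
    feature-to-instance ratio of hyperspectral data."""
    k = max(1, int(frac * X.shape[1]))
    ens = []
    for _ in range(n_learners):
        feats = rng.choice(X.shape[1], size=k, replace=False)
        ens.append((feats, centroid_fit(X[:, feats], y)))
    return ens

def rs_predict(ens, X):
    votes = np.stack([centroid_predict(m, X[:, f]) for f, m in ens])
    # Majority vote across the base learners.
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

# Toy data with more features than is comfortable for 40 samples.
X = rng.standard_normal((40, 60))
y = np.repeat([0, 1], 20)
X[y == 1] += 2.0
acc = (rs_predict(rs_fit(X, y), X) == y).mean()
```

Swapping the base learner for a decision tree or an ELM, and stacking EMAP features onto X, recovers the shape of the framework described above.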

Journal ArticleDOI
TL;DR: Experimental results reveal that rotation forest ensembles are competitive with other strong supervised classification methods, such as support vector machines, and can improve the classification accuracies significantly, confirming the importance of spatial contextual information in hyperspectral spectral-spatial classification.
Abstract: In this paper, we propose a new spectral–spatial classification strategy to enhance the classification performances obtained on hyperspectral images by integrating rotation forests and Markov random fields (MRFs). First, rotation forests are performed to obtain the class probabilities based on spectral information. Rotation forests create diverse base learners using feature extraction and feature subsets. The feature set is randomly divided into several disjoint subsets; then, feature extraction is performed separately on each subset, and a new set of linearly extracted features is obtained. The base learner is trained with this set. An ensemble of classifiers is constructed by repeating these steps several times. The classification and regression tree (CART), a weak classifier for hyperspectral data, is selected as the base classifier because it is unstable, fast, and sensitive to rotations of the axes. In this case, small changes in the training data of CART lead to large changes in the results, generating high diversity within the ensemble. Four feature extraction methods, including principal component analysis (PCA), neighborhood preserving embedding (NPE), linear local tangent space alignment (LLTSA), and linearity preserving projection (LPP), are used in rotation forests. Second, spatial contextual information, which is modeled by the MRF prior, is used to refine the classification results obtained from the rotation forests by solving a maximum a posteriori problem using the α-expansion graph cuts optimization method. Experimental results, conducted on three hyperspectral data sets with different resolutions and different contexts, reveal that rotation forest ensembles are competitive with other strong supervised classification methods, such as support vector machines. Rotation forests with local feature extraction methods, including NPE, LLTSA, and LPP, can lead to higher classification accuracies than that achieved by PCA. With the help of MRFs, the proposed algorithms can improve the classification accuracies significantly, confirming the importance of spatial contextual information in hyperspectral spectral–spatial classification.
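The core rotation-forest transform described above (random disjoint feature subsets, PCA per subset, block-diagonal assembly) can be sketched as follows; the PCA-via-eigendecomposition and the default subset count are assumptions of this sketch.

```python
import numpy as np

def rotation_matrix(X, n_subsets=4, seed=0):
    """One rotation-forest transform: split the features into disjoint
    random subsets, run PCA on each subset, and assemble the loadings
    into a block-diagonal rotation matrix (applied as X @ R before
    training each base tree)."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    R = np.zeros((n_feat, n_feat))
    for idx in np.array_split(rng.permutation(n_feat), n_subsets):
        Xi = X[:, idx] - X[:, idx].mean(axis=0)
        # PCA via eigendecomposition of the subset covariance.
        cov = np.atleast_2d(np.cov(Xi, rowvar=False))
        _, vecs = np.linalg.eigh(cov)
        R[np.ix_(idx, idx)] = vecs
    return R

R = rotation_matrix(np.random.default_rng(1).standard_normal((50, 12)))
```

Because each block holds orthonormal eigenvectors, R is a genuine rotation of the feature space; repeating the construction with a new seed per tree yields the diversity the abstract describes. NPE, LLTSA, or LPP would simply replace the PCA step.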

Journal ArticleDOI
TL;DR: In this article, a fast and fully automatic procedure for collecting electron diffraction tomography data is presented, in which the missing wedge of the reciprocal space between the patterns is recorded by longer exposures during the crystal tilt, and automatic data collection of limited tilt range can be used to determine the unitcell parameters, while data of larger tilt range are suitable to solve the crystal structure ab initio with direct methods.
Abstract: A fast and fully automatic procedure for collecting electron diffraction tomography data is presented. In the case of a very stable goniometer it is demonstrated how, by variation of the tilting speed and the CCD detector parameters, it is possible to obtain fully automatic precession-assisted electron diffraction tomography data collections, rotation electron diffraction tomography data collections or new integrated electron diffraction tomography data collections, in which the missing wedge of the reciprocal space between the patterns is recorded by longer exposures during the crystal tilt. It is shown how automatic data collection of limited tilt range can be used to determine the unit-cell parameters, while data of larger tilt range are suitable to solve the crystal structure ab initio with direct methods. The crystal structure of monoclinic MgMoO4 has been solved in this way as a test structure. In the case where the goniometer is not stable enough to guarantee a steady position of the crystal over large tilt ranges, an automatic method for tracking the crystal during continuous rotation of the sample is proposed.

Proceedings ArticleDOI
16 Jun 2015
TL;DR: In this paper, the authors present 3D VLSI with CoolCube™ integration, which vertically stacks several layers of devices with a connecting-via density above a million/mm².
Abstract: 3D VLSI with a CoolCube™ integration allows vertically stacking several layers of devices with a unique connecting-via density above a million/mm². This results in increased density with no extra cost associated to transistor scaling, while benefiting from gains in power and performance thanks to wire-length reduction. CoolCube™ technology leads to high-performance top transistors with Thermal Budgets (TB) compatible with bottom MOSFET integrity. Key enablers are the dopant activation by Solid Phase Epitaxy (SPE) or nanosecond laser anneal, low-temperature epitaxy, low-k spacers and direct bonding. New data on the maximal TB the bottom MOSFET can withstand (with high temperatures but short durations) offer new opportunities for top MOSFET process optimization.

Journal ArticleDOI
TL;DR: Calculation of the solubility parameters of the nanofiller surface polymers and of the PU segments portends better interfacial adhesion for CNF-based nanocomposites than for CNC-based ones.

Proceedings Article
08 Jul 2015
TL;DR: A dynamic thread and memory placement algorithm in Linux is designed and implemented that delivers performance similar to or better than the best static placement, and up to 218% better performance than when the placement is chosen randomly.
Abstract: It is well known that the placement of threads and memory plays a crucial role for performance on NUMA (Non-Uniform Memory-Access) systems. The conventional wisdom is to place threads close to their memory, to collocate on the same node threads that share data, and to segregate on different nodes threads that compete for memory bandwidth or cache resources. While many studies addressed thread and data placement, none of them considered a crucial property of modern NUMA systems that is likely to prevail in the future: asymmetric interconnect. When the nodes are connected by links of different bandwidth, we must consider not only whether the threads and data are placed on the same or different nodes, but how these nodes are connected. We study the effects of asymmetry on a widely available x86 system and find that performance can vary by more than 2× under the same distribution of thread and data across the nodes but different inter-node connectivity. The key new insight is that the best-performing connectivity is the one with the greatest total bandwidth as opposed to the smallest number of hops. Based on our findings we designed and implemented a dynamic thread and memory placement algorithm in Linux that delivers similar or better performance than the best static placement and up to 218% better performance than when the placement is chosen randomly.
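The paper's key insight, that bandwidth along a route can matter more than its hop count, can be illustrated with a toy topology; the node graph and all bandwidth numbers below are entirely made up for this sketch.

```python
# Directed link bandwidths (GB/s) of a hypothetical 4-node machine
# with an asymmetric interconnect.
links = {(0, 1): 16, (1, 0): 16, (1, 2): 8, (2, 1): 8,
         (2, 3): 16, (3, 2): 16, (0, 3): 8, (3, 0): 8,
         (0, 2): 4, (2, 0): 4}

def route_stats(path):
    """(hop count, bottleneck bandwidth) of a fixed route between nodes."""
    bw = min(links[(a, b)] for a, b in zip(path, path[1:]))
    return len(path) - 1, bw

direct = route_stats([0, 2])     # one hop over a slow 4 GB/s link
via_1 = route_stats([0, 1, 2])   # two hops, but 8 GB/s end to end
best = max([direct, via_1], key=lambda s: s[1])
```

Ranking placements by bottleneck bandwidth picks the two-hop route here, mirroring the finding that hop count alone is a poor placement criterion on asymmetric interconnects.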

Journal ArticleDOI
TL;DR: An algorithm for estimating the relation between PAN and MS images directly from the available data through an efficient optimization procedure is developed and is shown to outperform several well-established state-of-the-art detail-extraction approaches used in the current literature.
Abstract: Many powerful approaches to the fusion of PANchromatic (PAN) and MultiSpectral (MS) images exploit the functional relation between them. To this purpose, the modulation transfer function of the MS sensor is typically used, being easily approximated as a Gaussian filter whose analytic expression is fully specified by the sensor gain at the Nyquist frequency. However, this characterization is often inadequate in practice. In this paper, we develop an algorithm for estimating the relation between PAN and MS images directly from the available data through an efficient optimization procedure. The effectiveness of the approach is validated both on a reduced-scale data set generated by degrading images acquired by the IKONOS sensor and on full-scale data consisting of images collected by the QuickBird sensor. In the first case, the proposed method achieves performances very similar to those of the algorithm that relies upon full knowledge of the degrading filter. In the second, it is shown to outperform several well-established state-of-the-art detail-extraction approaches used in the current literature.
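The Gaussian MTF characterization that the paper improves upon is simple to state: a Gaussian frequency response whose width is fixed by the sensor gain at the Nyquist frequency. A minimal sketch, with notation assumed for illustration:

```python
import numpy as np

def gaussian_mtf_sigma(gain_nyq, f_nyq=0.5):
    """Frequency-domain std of a Gaussian MTF G(f) = exp(-f^2 / (2 s^2))
    chosen so that G(f_nyq) equals the sensor gain at Nyquist."""
    return f_nyq / np.sqrt(-2.0 * np.log(gain_nyq))

def gaussian_mtf(f, gain_nyq, f_nyq=0.5):
    """Evaluate the Gaussian MTF at normalized frequency f."""
    s = gaussian_mtf_sigma(gain_nyq, f_nyq)
    return np.exp(-(f ** 2) / (2 * s ** 2))
```

A single scalar (the Nyquist gain) pins down the whole filter, which is exactly why the characterization can be inadequate and why estimating the PAN-MS relation from the data itself can help.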

Journal ArticleDOI
TL;DR: In this article, the authors present an attempt to supplement the FAST code system with a novel solver characterized by tight coupling between the different equations, parallel computing capabilities, adaptive time-stepping and more accurate treatment of some of the phenomena involved in a reactor transient.



Journal ArticleDOI
TL;DR: Hydration water is found to be more mobile around tau fibers than around nonaggregated tau, which is suggested to promote fiber formation through entropic effects.
Abstract: The paired helical filaments (PHF) formed by the intrinsically disordered human protein tau are one of the pathological hallmarks of Alzheimer disease. PHF are fibers of amyloid nature that are composed of a rigid core and an unstructured fuzzy coat. The mechanisms of fiber formation, in particular the role that hydration water might play, remain poorly understood. We combined protein deuteration, neutron scattering, and all-atom molecular dynamics simulations to study the dynamics of hydration water at the surface of fibers formed by the full-length human protein htau40. In comparison with monomeric tau, hydration water on the surface of tau fibers is more mobile, as evidenced by an increased fraction of translationally diffusing water molecules, a higher diffusion coefficient, and increased mean-squared displacements in neutron scattering experiments. Fibers formed by the hexapeptide (306)VQIVYK(311) were taken as a model for the tau fiber core and studied by molecular dynamics simulations, revealing that hydration water dynamics around the core domain is significantly reduced after fiber formation. Thus, an increase in water dynamics around the fuzzy coat is proposed to be at the origin of the experimentally observed increase in hydration water dynamics around the entire tau fiber. The observed increase in hydration water dynamics is suggested to promote fiber formation through entropic effects. Detection of the enhanced hydration water mobility around tau fibers is conjectured to potentially contribute to the early diagnosis of Alzheimer patients by diffusion MRI.

Journal ArticleDOI
TL;DR: Aqueous co-dispersions of Lanthanum Strontium Manganite (LSM) and Yttria-Stabilized Zirconia (YSZ) were freeze-cast and partially sintered, resulting in anisotropic, hierarchically porous composites for potential applications as solid oxide fuel cell (SOFC) cathodes as discussed by the authors.
Abstract: Aqueous co-dispersions of Lanthanum Strontium Manganite (LSM) and Yttria-Stabilized Zirconia (YSZ) were freeze-cast and partially sintered, resulting in anisotropic, hierarchically porous composites for potential applications as solid oxide fuel cell (SOFC) cathodes. The uniform phase dispersion was validated using SEM–EDS and FIB-SEM tomography. Using reconstructed 3D images of samples sintered at 1200 and 1300 °C, the effect of sintering on phase connectivity, triple phase boundary (TPB) density and phase tortuosity was explored. The higher sintering temperature resulted in lower TPB density and less open pore volume, but decreased tortuosity for both the LSM and YSZ due to densification of the structure at high temperatures. Due to the unique double-sided morphology of the freeze-cast walls and the benefits gained from less tortuous percolation paths, a decrease in TPB density and open porosity from elevated sintering temperatures may not degrade the electrochemical performance as much as it would for a standard isotropic microstructure.

Journal ArticleDOI
TL;DR: The aim is to highlight the recent progress in the field of LbL films for biomedical applications and to discuss the various ways to spatially and temporally control the biochemical and mechanical properties of multilayers.
Abstract: Introduced in the '90s by Prof. Moehwald, Lvov, and Decher, the layer-by-layer (LbL) assembly of polyelectrolytes has become a popular technique to engineer various types of objects such as films, capsules and free-standing membranes, with an unprecedented control at the nanometer and micrometer scales. The LbL technique makes it possible to engineer biofunctional surface coatings, which may be dedicated to biomedical applications in vivo but also to fundamental studies and diagnosis in vitro. Initially mostly developed as 2D coatings and hollow capsules, the range of complex objects created by the LbL technique has greatly expanded in the past 10 years. In this Review, the aim is to highlight the recent progress in the field of LbL films for biomedical applications and to discuss the various ways to spatially and temporally control the biochemical and mechanical properties of multilayers. In particular, three major developments of LbL films are discussed: 1) the new methods and templates to engineer LbL films and control cellular processes from adhesion to differentiation, 2) the major ways to achieve temporal control by chemical, biological and physical triggers, and 3) the combinations of LbL technique, cells and scaffolds for repairing 3D tissues, including cardio-vascular devices, bone implants and neuro-prosthetic devices.

Proceedings ArticleDOI
04 Nov 2015
TL;DR: The MRTA framework provides a general approach to timing verification for multicore systems that is parametric in the hardware configuration and so can be used at the architectural design stage to compare the guaranteed levels of performance that can be obtained with different hardware configurations.
Abstract: In this paper, we introduce a Multicore Response Time Analysis (MRTA) framework. This framework is extensible to different multicore architectures, with various types and arrangements of local memory, and different arbitration policies for the common interconnects. We instantiate the framework for single level local data and instruction memories (cache or scratchpads), for a variety of memory bus arbitration policies, including: Round-Robin, FIFO, Fixed-Priority, Processor-Priority, and TDMA, and account for DRAM refreshes. The MRTA framework provides a general approach to timing verification for multicore systems that is parametric in the hardware configuration and so can be used at the architectural design stage to compare the guaranteed levels of performance that can be obtained with different hardware configurations. The MRTA framework decouples response time analysis from a reliance on context independent WCET values. Instead, the analysis formulates response times directly from the demands on different hardware resources.
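For context, the context-independent WCET analysis that MRTA moves away from is the classical uniprocessor fixed-priority recurrence, R_i = C_i + Σ_{j∈hp(i)} ⌈R_i/T_j⌉·C_j, iterated to a fixed point. A minimal sketch of that baseline (illustrative only; the MRTA equations themselves additionally formulate demand on memories, the bus and DRAM):

```python
import math

def response_time(i, C, T):
    """Classical fixed-priority response-time recurrence, iterated to a
    fixed point. Tasks are indexed by priority (0 = highest) and assumed
    to have implicit deadlines (deadline = period T[i])."""
    R = C[i]
    while True:
        # interference from all higher-priority tasks released during R
        R_next = C[i] + sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        if R_next == R:
            return R          # fixed point: worst-case response time
        if R_next > T[i]:
            return None       # exceeds the deadline: unschedulable
        R = R_next

C = [1, 2, 3]                 # worst-case execution times
T = [4, 6, 12]                # periods
print([response_time(i, C, T) for i in range(3)])  # [1, 3, 10]
```

MRTA replaces the single context-independent C_i in this recurrence with per-resource demand terms, which is what makes it parametric in the hardware configuration.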

Journal ArticleDOI
TL;DR: Results of extensive molecular simulations that mimic a number of features of the experimental vapor deposition process are presented, including a mechanism based on distinct orientations observed at equilibrium near the surface of the film, which get trapped within the film during the non-equilibrium process of vapor deposition.
Abstract: Enhanced kinetic stability of vapor-deposited glasses has been established for a variety of organic glass formers. Several recent reports indicate that vapor-deposited glasses can be orientationally anisotropic. In this work, we present results of extensive molecular simulations that mimic a number of features of the experimental vapor deposition process. The simulations are performed on a generic coarse-grained model and an all-atom representation of N,N′-bis(3-methylphenyl)-N,N′-diphenylbenzidine (TPD), a small organic molecule whose vapor-deposited glasses exhibit considerable orientational anisotropy. The coarse-grained model adopted here is found to reproduce several key aspects reported in experiments. In particular, the molecular orientation of vapor-deposited glasses is observed to depend on substrate temperature during deposition. For a fixed deposition rate, the molecular orientation in the glasses changes from isotropic, at the glass transition temperature, Tg, to slightly normal to the substrate at temperatures just below Tg. Well below Tg, molecular orientation becomes predominantly parallel to the substrate. The all-atom model is used to confirm some of the equilibrium structural features of TPD interfaces that arise above the glass transition temperature. We discuss a mechanism based on distinct orientations observed at equilibrium near the surface of the film, which get trapped within the film during the non-equilibrium process of vapor deposition.
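Orientational anisotropy of this kind is commonly quantified by the second-Legendre order parameter S = ⟨(3cos²θ − 1)/2⟩, where θ is the angle between a molecular long axis and the substrate normal: S → 1 for axes normal to the substrate, S → −0.5 for in-plane axes, and S = 0 for an isotropic glass. A minimal sketch with hypothetical axis vectors (not the simulation data):

```python
import math

def orientational_order(axes, normal=(0.0, 0.0, 1.0)):
    """Second-Legendre order parameter S = <(3 cos^2 theta - 1)/2> of
    molecular long axes relative to the substrate normal."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def unit(v):
        n = math.sqrt(dot(v, v))
        return tuple(a / n for a in v)

    nrm = unit(normal)
    s = 0.0
    for ax in axes:
        c = dot(unit(ax), nrm)          # cos(theta)
        s += 0.5 * (3 * c * c - 1)
    return s / len(axes)

# All molecules lying in the substrate plane -> S = -0.5
print(orientational_order([(1, 0, 0), (0, 1, 0), (1, 1, 0)]))  # -0.5
```

In a deposition study this would be evaluated per film, or per slab of depth, as a function of substrate temperature.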

Journal ArticleDOI
TL;DR: The results showed that the more crosslinked FS membranes enabled more efficient myoblast differentiation into myotubes, and it was shown that a tunable amount of BMP-2 can be loaded into and subsequently released from the membranes, depending on the crosslinking degree and the initial BMP-2 concentration in solution.

Journal ArticleDOI
01 Dec 2015-Talanta
TL;DR: This communication describes a simple, low-cost, adaptable, and portable method for patterning paper and subsequent use of the patterned paper in diagnostic tests and demonstrates the reproducibility of assays on these devices.

Journal ArticleDOI
TL;DR: In this article, the authors analyzed 2357 basal icequakes that were recorded at Glacier d'Argentière (Mont-Blanc Massif) between February and November of 2012, and that are likely to be associated with basal sliding.
Abstract: While basal icequakes associated with glacier motion have been detected under Antarctica for several decades, there remains very little evidence of stick-slip motion for Alpine glaciers. Here we analyzed 2357 basal icequakes that were recorded at Glacier d'Argentière (Mont-Blanc Massif) between February and November of 2012, and that are likely to be associated with basal sliding. These events have been classified into 18 multiplets, based on their waveforms. The strong similarity of the waveforms within each multiplet suggests an isolated repeating source. Despite this similarity, the peak amplitude within each multiplet varies gradually in time, by up to a factor of 18. The distribution of these events in time is relatively complex. For long time scales we observe progressive variations in the amplitudes of events within each multiplet. For intermediate time scales (hours), the events occur regularly in time, with typical return times of several minutes up to several hours. For short time scales (from 0.01 to 100 s), the largest multiplet shows clustering in time, with a power-law distribution of the interevent times. The location of these events and their focal mechanisms are not well constrained, because most of these events were detected by a single seismometer. Nevertheless, the locations can be estimated with an accuracy of a few tens of meters using a polarization analysis. The estimated average depth of the basal events is 179 m, which is in good agreement with the estimated glacier thickness. The relative changes in distance between the source and the sensor can be measured accurately by correlating separately the P-wave and S-wave parts of the seismograms of each event with the template waveforms, which are obtained by averaging the signals within each multiplet. We observed small variations in the times between the P-wave and the S-wave of up to 0.6 ms over 50 days.
These variations cannot be explained by displacement of the sensor with respect to the glacier, but might be due to small changes in the seismic wave velocities with time. Finally, we found using numerical simulations that the observed signals are better explained by a horizontal shear fault with slip parallel to the glacier flow, than by a tensile fault. These results suggest that the basal events are associated with stick-slip motion of the glacier over rough bedrock. The rupture length and the slip are difficult to estimate. Nonetheless, the rupture length is likely to be of the order of meters, and the total seismic slip accumulated over one day might be as large as the glacier motion during the most active bursts.
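The sub-millisecond P-to-S time changes described above are the kind of measurement obtained by cross-correlating each event against the multiplet's template waveform and refining the correlation peak to sub-sample precision. A minimal sketch of that generic technique with synthetic pulses (not the authors' processing chain):

```python
def xcorr_delay(a, b, dt=1.0):
    """Delay of signal b relative to template a (same length), from the
    peak of their cross-correlation, refined to sub-sample precision by
    a 3-point parabolic fit around the peak."""
    n = len(a)
    cc = [sum(a[i] * b[i + k] for i in range(n) if 0 <= i + k < n)
          for k in range(-(n - 1), n)]
    k = max(range(len(cc)), key=cc.__getitem__)
    delay = k - (n - 1)                       # integer-sample lag
    if 0 < k < len(cc) - 1:                   # parabolic refinement
        y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
        if y0 - 2 * y1 + y2 != 0:
            delay += 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return delay * dt

a = [0, 0, 1, 2, 1, 0, 0, 0]        # template pulse
b = [0, 0, 0, 1, 2, 1, 0, 0]        # same pulse, one sample later
print(xcorr_delay(a, b, dt=0.001))  # 0.001 (b lags the template by 1 ms)
```

Applying this separately to the windowed P-wave and S-wave segments, as in the study, yields the differential P-to-S time whose drift tracks velocity changes in the ice.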