Showing papers by "Instituto Superior Técnico" published in 2023


Journal ArticleDOI
TL;DR: In this article, the potential of active PLA films to extend food shelf-life was tested on almonds and beef, and the results showed that these active PLA packages can contribute to delaying lipid oxidation in foodstuffs with high fat content.

2 citations


Journal ArticleDOI
TL;DR: In this paper, the authors reveal the recent research trends in the consolidation of stone-built heritage and discuss the advantages and drawbacks of the options and strategies followed by researchers over the last 10 years.
Abstract: This work aims to reveal the recent research trends in the consolidation of stone-built heritage and discuss the advantages and drawbacks of the options and strategies followed by researchers over the last 10 years. Peer-reviewed articles were used to build a database and analyze the details of the stone samples (chemical nature, type of voids, and condition), treatment protocols (application methods and consolidation products), and testing methods to assess the strengthening results of the treatments. In addition, the reported increments in the mechanical properties were also examined to reveal the strengthening capabilities of recent consolidation treatments. The statistical treatment of the results allowed pinpointing the stone varieties that need more frequent consolidation actions (limestone, biocalcarenite, and sandstone) and the aspects that make them more difficult and riskier. Other tendencies were discussed, for example, the predominant use of sound samples over decayed samples (61% vs. 39%) or the predominant use of alkoxysilanes (~46%) over other families of consolidants (e.g., nanolime, ~21%). The current consolidation treatments were found to improve stone strength; however, the most problematic issue in the current state of the art is the difficulty of identifying high-risk situations of over-consolidation or poor distribution in depth, because of either the lack of testing or the limitations of the various assessment techniques.

2 citations


Posted ContentDOI
12 Apr 2023
TL;DR: In this article, the authors describe the numerical dataset of hydrometric variables that characterizes a flood event that occurred in February 2016 in the Portuguese Águeda River, referred to as Agueda.2016Flood.
Abstract: Floods are among the most common natural disasters, responsible for severe damage and human losses. Combining numerical modelling with user-friendly tools for geographically referenced data has been adopted to increase preparedness and reduce vulnerabilities. This paper describes the numerical dataset of hydrometric variables that characterizes a flood event that occurred in February 2016 in the Portuguese Águeda River, hereafter referred to as Agueda.2016Flood. The dataset was numerically produced and managed through the RiverCure Portal, a collaborative web platform connected to a validated shallow-water model featuring modelled dynamic bed geometries and sediment transport. The Agueda.2016Flood dataset can be used as a starting point for designing other experiments and tools, and for learning and applying the proposed approach by directly using the RiverCure Portal. This dataset includes modelled hydrodynamic data (output data) and the topographic, geometrical, land-use and hydrologic data (input data) necessary to carry out the numerical simulation of the flood event.

1 citation


Journal ArticleDOI
TL;DR: In this paper, the authors study collisions of Gaussian mass-density blobs in a holographic plasma, using a large D effective theory, as a model for holographic shockwave collisions.
Abstract: We study collisions of Gaussian mass-density blobs in a holographic plasma, using a large D effective theory, as a model for holographic shockwave collisions. The simplicity of the effective theory allows us to perform the first 4+1 collisions in Einstein-Maxwell theory, which are dual to collisions of matter with non-zero baryonic number. We explore several collision scenarios with different blob shapes, impact parameters and charge values and find that collisions with impact parameter below the transverse width of the blobs are equivalent under rescaling. We also observe that charge weakly affects the rest of the quantities. Finally, we study the entropy generated during collisions, both by charge diffusion and viscous dissipation. Multiple stages of linear entropy growth are identified, whose rates are not independent of the initial conditions.

1 citation


Journal ArticleDOI
TL;DR: In this article, the Littlest Modular Seesaw is formulated within the framework of multiple modular symmetries and shown to agree with neutrino oscillation data, including predictive relations between the leptonic mixing angles and the ratio of light neutrino masses, which non-trivially agree with the experimental values.
Abstract: We present the first complete model of the Littlest Modular Seesaw, based on two right-handed neutrinos, within the framework of multiple modular symmetries, justifying the use of multiple moduli fields which take their values at 3 specific stabilizers of $\Gamma_4 \simeq S_4$, including a new phenomenological possibility. Using a semi-analytical approach, we perform a $\chi^2$ analysis of each case and show that good agreement with neutrino oscillation data is obtained, including predictive relations between the leptonic mixing angles and the ratio of light neutrino masses, which non-trivially agree with the experimental values. It is noteworthy that in this very predictive setup, the models fit the global fits of the experimental data remarkably well, both with and without the Super-Kamiokande atmospheric data, for both choices of stabilizers. By extending the model to include a weighton and the double cover group $\Gamma'_4 \simeq S'_4$, we are able to also account for the hierarchy of the charged leptons using modular symmetries, without altering the neutrino predictions.

1 citation


Journal ArticleDOI
TL;DR: In this paper, a PDE-based node-to-element contact formulation is proposed to solve nonsmooth contact problems with a continuous gap using finite element discretizations, which also achieves uniqueness of contact detection.
Abstract: We introduce a PDE-based node-to-element contact formulation as an alternative to classical, purely geometrical formulations. It is challenging to devise solutions to nonsmooth contact problems with a continuous gap using finite element discretizations. We herein achieve this objective by constructing an approximate distance function (ADF) to the boundaries of solid objects, and in doing so, also obtain universal uniqueness of contact detection. Unilateral constraints are implemented using a mixed model combining the screened Poisson equation and a force element, which has the topology of a continuum element containing an additional incident node. An ADF is obtained by solving the screened Poisson equation with constant essential boundary conditions and a variable transformation. The ADF does not explicitly depend on the number of objects, and a single solution of the partial differential equation for this field uniquely defines the contact conditions for all incident points in the mesh. Having an ADF field to any obstacle circumvents multiple target surfaces and avoids the specific data structures present in traditional contact-impact algorithms. We also relax the interpretation of the Lagrange multipliers as contact forces, and the Courant–Beltrami function is used with a mixed formulation producing the required differentiable result. We demonstrate the advantages of the new approach in two- and three-dimensional problems that are solved using Newton iterations. Simultaneous constraints for each incident point are considered.
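
To make the ADF construction concrete, here is a minimal finite-difference sketch (not the paper's finite element implementation): it solves the screened Poisson equation u - ℓ²∇²u = 0 with the essential boundary condition u = 1 on an obstacle, then recovers an approximate distance through the variable transformation d ≈ -ℓ ln u. The grid, the screening length ℓ, and the Jacobi solver are assumptions made for brevity.

```python
# Minimal 2D sketch: approximate distance function (ADF) from the screened
# Poisson equation, assuming a Jacobi finite-difference solver and the
# transformation d = -ell * ln(u). Illustrative only, not the paper's FEM code.
import numpy as np

n, L = 101, 1.0                      # grid resolution and domain size
h = L / (n - 1)                      # grid spacing
ell = 0.05                           # screening length (assumed value)

x, y = np.meshgrid(np.linspace(0, L, n), np.linspace(0, L, n))
obstacle = (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.1 ** 2   # a disc "target"

u = np.zeros((n, n))
u[obstacle] = 1.0                    # constant essential boundary condition

# Solve u - ell^2 * lap(u) = 0 away from the obstacle (Jacobi iterations)
c = ell ** 2 / h ** 2
for _ in range(5000):
    nb = u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
    u_new = u.copy()
    u_new[1:-1, 1:-1] = c * nb / (1.0 + 4.0 * c)
    u_new[obstacle] = 1.0
    u = u_new

# Variable transformation: recover the approximate distance to the obstacle
d = -ell * np.log(np.clip(u, 1e-12, None))
```

A single solve of this field yields a gap value at every point at once, which is the property the abstract exploits to avoid per-surface contact search.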



Posted ContentDOI
15 May 2023
TL;DR: In this paper, a new clustering strategy was developed and tested using Self-Organizing Maps (SOM), an unsupervised Artificial Neural Network (ANN) type, for identifying zones with similar contamination characteristics within an aquifer.
Abstract: A new clustering strategy was developed and tested using Self-Organizing Maps (SOM), an unsupervised Artificial Neural Network (ANN) type, for identifying zones with similar contamination characteristics within an aquifer. The Gabros de Beja aquifer system (GBAS), located in the Alentejo region, Portugal, was selected as a case study due to its vulnerability to diffuse pollution from intensive agriculture. The proposed methodology consists of: (a) selection of the most representative groundwater contaminants in the aquifer area (i.e., nitrates, sulfates and chlorides); (b) determination of the Natural Background Level (NBL) of the selected groundwater compounds; (c) computation of the ratio between the median concentrations of the groundwater compounds being analyzed and their respective NBL concentration; and finally, (d) application of the SOM clustering technique to group homogeneous contaminated areas within the aquifer. The NBL indicates which thresholds are likely signs of anthropogenic effects, by indicating how high or low a parameter's value would be expected to be under natural geogenic conditions, and was therefore used as a first normalization of the dataset. For this methodology, the NBL was computed as the 90th percentile concentration of the selected compounds in piezometers within the study area that presented a median nitrate concentration smaller than 10 mg/L. Nitrate, sulfate and chloride concentration medians from 45 piezometers were used. The results show that the SOM network classified the piezometers into six classes (CL1 to CL6). The least contaminated clusters were CL1 (8) and CL4 (17), with all three compounds presenting median concentrations around 50 mg/L, which for nitrate is the threshold for drinking water limits. CL5 (5) reached median nitrate concentrations above 100 mg/L, while chlorides and sulfates remained below 50 mg/L. CL2 (6) showed an increase in chloride concentration to 100 mg/L, with the other two compounds' concentrations below 65 mg/L. CL3 (3) presented the highest salinization levels, reaching chloride concentrations above 180 mg/L, with sulfates around 80 mg/L and nitrates around 50 mg/L. Finally, CL6 (6) presented median levels of the three compounds above 80 mg/L. The most contaminated groups (CL3, CL5 and CL6) were present in sedimentary and weathered metamorphic lithologies, which present high hydraulic conductivities, coinciding either with urban or agricultural areas associated with large-scale irrigation schemes, reinforcing the anthropogenic source of the contaminants. Hence, this study presented a clustering framework that, by reducing the dimensionality of the original dataset, helps to establish a priority list of polluted areas with different degrees of contamination, which is essential for implementing monitoring and management measures to attenuate groundwater pollution.
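
Steps (c) and (d) of this methodology translate into very little code. The sketch below is a hypothetical illustration with the MiniSom library, using a 3x2 map so that six classes emerge as in the study; the median concentrations, NBL values, and hyper-parameters are placeholders, not the GBAS data.

```python
import numpy as np
from minisom import MiniSom

# Placeholder inputs: per-piezometer medians of nitrate, sulfate, chloride
# (mg/L) and NBLs computed as 90th percentiles of low-nitrate piezometers.
rng = np.random.default_rng(0)
medians = rng.uniform(5.0, 200.0, size=(45, 3))
nbl = np.array([10.0, 40.0, 60.0])           # assumed NBL per compound

ratios = medians / nbl                        # step (c): normalize by NBL

som = MiniSom(3, 2, input_len=3, sigma=0.8, learning_rate=0.5, random_seed=0)
som.train(ratios, 2000)                       # step (d): fit the SOM

# Each piezometer is assigned to its best-matching unit (one of six classes)
labels = np.array([som.winner(r) for r in ratios])
```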

Journal ArticleDOI
TL;DR: In this paper, the authors used seismic data as well as surface abundances to model the supergiant α-Ori, with the goal of setting an upper bound on the axion-photon coupling constant $g_{a\gamma}$.
Abstract: In this work, for the first time, we use seismic data as well as surface abundances to model the supergiant α-Ori, with the goal of setting an upper bound on the axion-photon coupling constant $g_{a\gamma}$. We find that, in general, stellar models with $g_{a\gamma} \in [0.002; 2.0] \times 10^{-10}\,\mathrm{GeV}^{-1}$ agree with the observational data, but beyond that upper limit, we do not find stellar models that are compatible with the observational constraints and the current literature. From $g_{a\gamma} = 3.5 \times 10^{-10}\,\mathrm{GeV}^{-1}$ on, the algorithm did not find any fitting models. Even so, all the axionic models considered present distinct internal profiles from the reference case, without axions. Moreover, as the axion energy losses become more significant, the behavior of the stellar models becomes more diversified, even with very similar input parameters. Nonetheless, the consecutive increments of $g_{a\gamma}$ still show systematic tendencies, resulting from the axion energy losses. Moreover, we establish three important conclusions: (1) the increased luminosity and higher neutrino production are measurable effects, possibly associated with axion energy losses; (2) stellar models with axion energy loss show a quite distinct internal structure; and (3) the importance of future asteroseismic missions in observing low-degree nonradial modes in massive stars is emphasized, as internal gravity waves probe the near-core regions, where axion effects are most intense. Thus, more seismic data will allow us to constrain $g_{a\gamma}$ better and to prove or dismiss the existence of axion energy loss inside massive stars.

Journal ArticleDOI
TL;DR: In this paper, the authors study the development of a low-momentum whirling laminar plume that results from the interaction between a plume and a circulating flow imposed by a cylindrical rotating screen.

Posted ContentDOI
15 May 2023
TL;DR: In this article, a method for acquiring 2D scour profiles was developed to enable continuous monitoring of the scour phenomenon using a computer vision technique, namely homography transformation, which relates the coordinates of points in one image to those of corresponding points in another image through a Python routine.
Abstract: Scour monitoring in experimental environments relies primarily on visual point-wise measurements that may provide less accurate estimates of scour and its effects. To address these issues, many studies and methods have been developed to analyse scour surfaces. Recently, the use of 3D point clouds and digital elevation models has proven to be an effective method for describing scour around bridge foundations with a high degree of accuracy. This is especially true under drained conditions. Therefore, it has become necessary to develop a system that can continuously monitor the development of scour at bridge foundations without interrupting the flow. Few studies have addressed continuous monitoring of the scouring process. These include: (i) photogrammetry-based methods using two cameras and algorithms for image calibration, rectification, and stereo-triangulation, and (ii) a laser-based approach using both a laser source and a camera. Building on these studies, further research is needed to effectively monitor the scouring process using the improving technology of submersible cameras and underwater processing capabilities. In this study, a novel method for acquiring 2D scour profiles was developed to enable continuous monitoring of the scour phenomenon. The developed technique uses a computer vision technique, namely homography transformation, which relates the coordinates of points in one image to the coordinates of corresponding points in another image through a Python routine. This algorithm also addresses the critical issues inherent in any underwater image processing technique, such as correcting for perspective, distortion, scaling, and camera lens rotation. In the laboratory, four cameras were used to collect synchronized underwater images of the scour holes formed, and the affected surrounding areas, around an oblong bridge pier model due to the local scour phenomenon. By processing each image sequence and running the Python code to measure the depth of the border line between the sand and the bridge foundation model at specific times during the scouring experiment, it was possible to obtain the evolution of the scour holes in the form of 2D bed profiles. The accuracy of the developed algorithm in studying the bed morphology in the vicinity of bridge piers during the scouring process showed promising results compared to point-wise scour depth measurements. This work was partially funded by the Portuguese Foundation for Science and Technology (FCT) through Project DikesFPro PTDC/ECI-EGC/7739/2020 and through CERIS funding UIDB/04625/2020.
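
A minimal sketch of the homography step is given below, using OpenCV as a stand-in for the authors' Python routine; the marker pixel positions, flume coordinates, and detected border points are invented for illustration.

```python
import numpy as np
import cv2

# Four reference markers visible in the camera image: pixel coordinates
# and their known positions in the flume plane (metres). Values are made up.
px_pts = np.float32([[102, 640], [1180, 652], [1168, 88], [95, 75]])
m_pts  = np.float32([[0.00, 0.00], [0.60, 0.00], [0.60, 0.30], [0.00, 0.30]])

# Homography correcting perspective, scale, and in-plane camera rotation
H, _ = cv2.findHomography(px_pts, m_pts)

# Detected sand/foundation border line (pixels) -> metric 2D scour profile
border_px = np.float32([[300, 520], [400, 515], [500, 508]]).reshape(-1, 1, 2)
border_m = cv2.perspectiveTransform(border_px, H).reshape(-1, 2)
print(border_m)   # columns: along-flume position and bed elevation (m)
```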

Journal ArticleDOI
01 Mar 2023-Sensors
TL;DR: In this article, a new eddy current testing array probe and readout electronics that target layer-wise quality control in powder bed fusion metal additive manufacturing are presented. The proposed design approach brings important benefits to sensor-count scalability, exploring alternative sensor elements and minimalist signal generation and demodulation.
Abstract: This work presents a new eddy current testing array probe and readout electronics that target the layer-wise quality control in powder bed fusion metal additive manufacturing. The proposed design approach brings important benefits to the sensors’ number scalability, exploring alternative sensor elements and minimalist signal generation and demodulation. Small-sized, commercially available surface-mounted technology coils were evaluated as an alternative to usually employed magneto-resistive sensors, demonstrating low cost, design flexibility, and easy integration with the readout electronics. Strategies to minimize the readout electronics were proposed, considering the specific characteristics of the sensors’ signals. An adjustable single phase coherent demodulation scheme is proposed as an alternative to traditional in-phase and quadrature demodulation provided that the signals under measurement showed minimal phase variations. A simplified amplification and demodulation frontend using discrete components was employed together with offset removal, vector amplification, and digitalization implemented within the microcontrollers’ advanced mixed signal peripherals. An array probe with 16 sensor coils and a 5 mm pitch was materialized together with non-multiplexed digital readout electronics, allowing for a sensor frequency of up to 1.5 MHz and digitalization with 12 bits resolution, as well as a 10 kHz sampling rate.
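
The single-phase coherent demodulation idea can be sketched in a few lines: multiply the sensor signal by one reference whose phase is adjusted to the nearly constant signal phase, then low-pass filter, so a single channel replaces the usual in-phase and quadrature pair. The numbers below (sample rate, carrier, phase) are invented for a synthetic numpy illustration, not the instrument's firmware.

```python
import numpy as np

fs, f0 = 12.0e6, 1.5e6                  # sample rate and excitation frequency
t = np.arange(4096) / fs

# Synthetic eddy-current sensor signal: the amplitude carries the defect
# information; the phase is assumed nearly constant (the paper's premise)
rng = np.random.default_rng(1)
sig = 0.80 * np.cos(2 * np.pi * f0 * t + 0.05) \
      + 0.01 * rng.standard_normal(t.size)

phi = 0.05                              # adjustable demodulation phase
ref = np.cos(2 * np.pi * f0 * t + phi)  # single phase-aligned reference

# Mixing plus averaging (a crude low-pass) yields the signal amplitude
amplitude = 2.0 * np.mean(sig * ref)
print(amplitude)                        # ~0.80 when phi matches the signal
```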


Posted ContentDOI
15 May 2023
TL;DR: In this paper, the authors evaluated the soil water and salt budgets in nine commercial orchards located in the Roxo Irrigation District (RID), in southern Portugal, using the multiple ion chemistry module available in the HYDRUS-1D model during the 2019 and 2020 growing seasons.
Abstract: Secondary salinization has long been reported in the Roxo Irrigation District (RID), in southern Portugal, due to the use of saline-prone irrigation water and the existence of poorly structured soils. This study evaluates the soil water and salt budgets in nine commercial orchards located in the RID using the multiple ion chemistry module available in the HYDRUS-1D model during the 2019 and 2020 growing seasons. The study crops were almond, olive, citrus, and pomegranate. The model successfully simulated soil water contents measured in the different fields over the two seasons. There was a clear underestimation of the ECe in some fields, while simulations of SAR were found to be acceptable. Modeling errors were mostly associated with missing information on fertigation events rather than difficulties in simulating the effect of irrigation water quality on soil quality. The water and salt balances were also computed for the 1979-2020 period. Considering the probability distribution of salt accumulation during this period, the risk of salt accumulation was very high, except in the citrus areas. The factors influencing the salinity build-up in the study sites were the irrigation strategy, the seasonal irrigation and rainfall depths, the crop growing period, rainfall distribution in the late and non-growing seasons, soil drainage conditions, and irrigation water quality. On the other hand, for current climate conditions and irrigation water quality, the risk of soil salinity levels affecting crop development and yields was found to be minor. Only in two of the study sites was there a need to promote salt leaching, following strategies that differed between locations. This study further aims to promote sustainable irrigation management practices through the better use of soil and water resources in the Alentejo region of southern Portugal.


Posted ContentDOI
15 May 2023
TL;DR: In this article, a gap-filling approach using Kriging-based methods (Ordinary Kriging and Simple Cokriging) is presented and compared to a linear regression approach proposed by the Food and Agriculture Organization (FAO method).
Abstract: In hydro-meteorological time series, missing data are a common problem that can be caused by a variety of factors, including sensor malfunction, errors in measurement, and faults in data acquisition by the operators. Because complete time series are necessary for conducting trustworthy analysis, finding efficient solutions to this issue is crucial. In this work, a gap-filling approach using Kriging-based methods (Ordinary Kriging and Simple Cokriging) is presented and compared to a linear regression approach proposed by the Food and Agriculture Organization (FAO method). The proposed procedure consists of fitting semi-variogram models for each month using the available daily rainfall collected at all stations and averaged for the specific month in the reference period. The advantages are that only 12 monthly semi-variograms have to be built, rather than one for each missing day of the dataset, and that a greater amount of data can be processed at a time. Then, Ordinary Kriging and Cokriging are used to estimate the daily precipitation where it is missing, using the semi-variogram of the month of interest. The Cokriging method is applied considering the elevation data as the second variable. The FAO approach fills the gaps in rainfall time series by means of a linear relationship between the station that presents missing data and the best-correlated station that has data gathered at the gap time. The approaches were compared using daily rainfall data from 60 rain gauges from the Portuguese case study of the InTheMED project for a 30-year reference period (1976-2005). To evaluate the effectiveness of the proposed approaches, one year of data (1985) was removed from some stations; the missing precipitation data were then estimated from the remaining stations by applying the three procedures. A cross-validation process and an analysis of the error statistics were used to determine the accuracy of the estimation for the three gap-filling methods. The outcomes showed that the geostatistical approaches outperformed the FAO method in daily estimation. The presented approach performed well in the study area, especially Ordinary Kriging, which estimated the daily missing data well with low computational effort. However, Cokriging did not significantly improve the estimates. The work presented herein is supported by the PRIMA programme under grant agreement No. 1923, project Innovative and Sustainable Groundwater Management in the Mediterranean (InTheMED). The PRIMA programme is supported by the European Union.
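
As an illustration of the kriging step, the sketch below uses the pykrige package (the abstract does not say which software was used); gauge coordinates, rainfall values, and the variogram model are placeholders, and in the proposed procedure the variogram parameters would come from the model fitted for the calendar month containing the gap.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Known daily rainfall (mm) at neighbouring gauges for one missing day;
# coordinates and values are invented for illustration.
x = np.array([0.0, 1.2, 2.5, 3.1, 4.0])   # easting (km)
y = np.array([0.0, 2.0, 0.5, 3.0, 1.0])   # northing (km)
z = np.array([5.0, 7.2, 4.1, 8.0, 6.3])   # observed rainfall (mm)

# In the proposed procedure the variogram would be the one fitted for the
# month containing the gap (12 monthly models in total).
ok = OrdinaryKriging(x, y, z, variogram_model="spherical")

# Estimate rainfall at the gauge with the missing record
z_est, var = ok.execute("points", np.array([2.0]), np.array([1.5]))
print(float(z_est), float(var))
```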

Journal ArticleDOI
TL;DR: In this paper, a modular data acquisition solution with digital integration is proposed for the COMPASS-U tokamak, which allows a high degree of flexibility and scalability with state-of-the-art performance.

Posted ContentDOI
15 May 2023
TL;DR: In this paper, the authors evaluated how extreme the 2022 fire season was when compared with the period 1979-2021 over Europe, and proposed methods comprising the analysis of fire-related products and atmospheric variables to evidence the fire-prone weather conditions.
Abstract: Over the summer of 2022, Europe experienced exceptional wildfire activity, with fires occurring more frequently and intensively, mainly in Spain, France, and Portugal. Together these countries registered more than 470 000 hectares of the total 786 000 hectares burnt in the European Union, according to the estimates of the European Forest Fire Information System (EFFIS) for this fire season. Southern Europe is a widely known climate change hotspot, subject to heatwaves, droughts, and wildfire activity (increases in the number and severity of fires, burnt area, and fire-season length), although severe droughts and heatwaves have also been expanding and worsening in central and northern Europe, increasing fire risk. This work aims to evaluate how extreme the 2022 fire season was when compared with the period 1979-2021 over Europe. The proposed methods comprise the analysis of fire-related products and atmospheric variables to evidence the fire-prone weather conditions. The European Centre for Medium-Range Weather Forecasts (ECMWF) ERA5 reanalysis dataset of the Fire Weather Index (FWI) and air temperature, relative humidity, and wind products are used. The FWI is part of the Canadian Fire Weather Index System and is defined as a numerical rating of the potential frontal fire intensity, indicating fire intensity by combining the rate of fire spread with the amount of fuel being consumed. The Standardized Precipitation Evapotranspiration Index (SPEI) at time scales of 1 to 6 months was used to assess drought conditions. Results highlight the new fire dynamics in Europe, since climate change effects are leading to new emergent hot spots (central and northern Europe) that are less well known than the Mediterranean Basin. This is extremely important for assessing fire danger as well as the characteristics of wildfires, and for improving monitoring, planning, and mitigation activities.
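
One simple way to quantify how extreme 2022 was against the 1979-2021 reference is sketched below with xarray: compare each summer-2022 day of an ERA5-based FWI product against its day-of-year climatological 95th percentile. The file name and variable name are assumptions made for this sketch, not the study's actual workflow.

```python
import xarray as xr

# Daily FWI fields from an ERA5-based fire-danger reanalysis; the file and
# variable names are assumptions for illustration.
fwi = xr.open_dataset("era5_fwi_daily.nc")["fwi"]

# Day-of-year 95th percentile over the 1979-2021 reference period
clim = fwi.sel(time=slice("1979-01-01", "2021-12-31"))
p95 = clim.groupby("time.dayofyear").quantile(0.95, dim="time")

# Fraction of summer-2022 days exceeding their climatological threshold
jja22 = fwi.sel(time=slice("2022-06-01", "2022-08-31"))
thresh = p95.sel(dayofyear=jja22["time"].dt.dayofyear)
exceed = (jja22 > thresh).mean("time")   # map of exceedance frequency
```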


Journal ArticleDOI
TL;DR: In this article, a more GPU-parallelizable version of the Topological Clustering algorithm, called Topo-Automaton Clustering, was implemented within AthenaMT, the software framework of the ATLAS trigger, and its results were compared to those of the standard CPU algorithm to ensure that physical validity is maintained.
Abstract: Given the upcoming High-Luminosity LHC Upgrade, the performance requirements for the trigger systems associated with the LHC experiments will increase due to the larger volume of data to be processed. One of the possibilities that the ATLAS Collaboration is evaluating for upgrading the software-based portion of its trigger system is the use of Graphical Processing Units as hardware accelerators. The present work focuses on the GPU acceleration of the Topological Clustering algorithm, which is used to reconstruct calorimeter showers by grouping cells according to their signal-to-noise ratio. A more GPU-parallelizable version of the Topological Clustering, called Topo-Automaton Clustering, was implemented within AthenaMT, the software framework of the ATLAS trigger, and its results were compared to those of the standard CPU algorithm to ensure physical validity is maintained. Time measurements suggest an average improvement of the event processing time by a factor between 3.5 and 5.5 (depending on the kind of event), though less than 20% of that time corresponds to the algorithm itself, suggesting that the main bottleneck lies in data transfers and conversions.
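
For intuition, a toy CPU version of signal-over-noise topological clustering on a 2D grid of cells is sketched below. The 4-2 seed/grow thresholds match the usual ATLAS topo-cluster defaults, but everything else (the grid, the neighbour definition, and the omission of cluster merging when growth regions touch) is a simplification of what the AthenaMT implementations actually do.

```python
import numpy as np
from collections import deque

def topo_cluster(snr, seed=4.0, grow=2.0):
    """Toy topological clustering: BFS-grow clusters from high |S/N| seeds."""
    labels = -np.ones(snr.shape, dtype=int)
    seeds = np.argwhere(np.abs(snr) >= seed)
    order = np.argsort(-np.abs(snr[tuple(seeds.T)]))   # strongest seeds first
    next_label = 0
    for sy, sx in seeds[order]:
        if labels[sy, sx] != -1:
            continue                                   # absorbed by a cluster
        labels[sy, sx] = next_label
        queue = deque([(sy, sx)])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                if (0 <= ny < snr.shape[0] and 0 <= nx < snr.shape[1]
                        and labels[ny, nx] == -1 and abs(snr[ny, nx]) >= grow):
                    labels[ny, nx] = next_label
                    queue.append((ny, nx))
        next_label += 1
    return labels

rng = np.random.default_rng(0)
cells = rng.normal(size=(64, 64))       # toy signal-to-noise per cell
cells[30:33, 30:33] += 6.0              # an injected "shower"
print(np.unique(topo_cluster(cells)))
```

The cellular-automaton variant in the paper recasts this growth as repeated local update passes, which is what makes it map well onto GPU threads.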

Posted ContentDOI
16 Mar 2023
TL;DR: In this paper, the authors present a discussion about challenges to overcome in tsunami risk management, such as the sophistication of earthquake and tsunami numerical schemes; uncertainty awareness and future needs to develop unanimous and systematic measures to reduce uncertainties associated with geophysical and engineering processes; pros and cons of using HPC resources towards safety and operational performance levels; and applicability to critical infrastructures.
Abstract: Regional and local tsunami sources are a cliché of scientific disaggregation. From the physical perspective, despite emerging studies on cascading hazard and risk, hazard characterization often sees the tsunami as an individual event without addressing the effects of the primary hazard (typically a high-magnitude earthquake) that triggered the tsunami. Moreover, tsunami effects are partitioned into single processes: hydraulic effects or induced effects, such as debris transport, which is a representative approach often assumed when treating complex phenomena. From a technical perspective, describing cascading hazards and translating them into a composite loading pattern for natural and built environments is challenging, and the difficulty increases exponentially when fluid-soil interactions are considered. From a modeling perspective, physical and numerical simulations are employed to complement scarce databases of extreme tsunami events. However, the level of modeling sophistication deemed necessary to reproduce such complex phenomena is elevated, and there are uncertainties associated with natural phenomena and their modelling, ranging from the genesis of the tsunami to structural and community response. The number and influencing potential of uncertainties pose an extraordinary concern when developing mitigation measures. From a risk management perspective, cascading natural and anthropogenic hazards constitute a challenge for combining safety requirements with financial, social, and ecological concerns. Risk management can benefit from strengthening the ties between natural hazards and engineering practitioners, linking science and industry, and promoting dialogue between risk analysts and policy-makers.

Ultimately, risk management requires heterogeneous data and information from real and synthetic origins. Yet, the quality of data used for risk management may often depend on the computational resources (in terms of performance, energy, and storage capacity) needed to simulate complex multi-scale and multi-physics phenomena, as well as to analyze large data sets. For example, the quality of the numerical solutions is often dependent on the amount of data used to calibrate the models, and the runtime of the models needs to be aligned with time constraints (e.g., faster-than-real-time tsunami simulations for early warning systems). The North American platform Hazus is capable of producing risk maps. In the European risk assessment, there is a lack of integration and interaction of results from the GEM, SERA, and TSUMAPS-NEAM projects, intended to develop seismic and tsunami hazard studies, respectively. Computational modeling aids the advancement of scientific knowledge by aggregating the numerous factors involved and translating them into tsunami risk management policies.

A global trend in geosciences and engineering is to develop sophisticated numerical schemes and to build computational facilities that can solve them, thereby aiming to reduce uncertainty levels and preparing the scientific (r)evolution for the so-called Exascale Era. The present work aims to gather multidisciplinary perspectives on a discussion about: 1) challenges to overcome in tsunami risk management, such as the sophistication of earthquake and tsunami numerical schemes; 2) uncertainty awareness and future needs to develop unanimous and systematic measures to reduce uncertainties associated with geophysical and engineering processes; 3) pros and cons of using HPC resources towards safety and operational performance levels; and 4) applicability to critical infrastructures.

Posted ContentDOI
15 May 2023
TL;DR: Azevedo et al., as mentioned in this paper, detected double diffusion in temperature and salinity profiles plotted as a function of depth, noting that the mixing interfaces followed by layers of well-mixed temperature and salinity are well defined as a step structure and were validated as double diffusion by calculating the Turner angle and density ratio at those depths.
Abstract: Seismic oceanography, as remote sensing of the ocean structure by the multichannel reflection seismic method, can provide high-resolution images enabling the study of fine-scale ocean processes along large distances. The seismic acoustic response depends on differences in ocean temperature and salinity, and the resulting seismic images track the interfaces between those thermohaline layers both laterally and in depth. The structural interpretation of observed seismic reflections provides valuable oceanographic insights for understanding mixing processes and phenomena occurring at different water column depths.

Three parallel 2D multichannel seismic reflection profiles acquired by the Portuguese Task Force for the Extension of the Continental Shelf in the Madeira Abyssal Plain (MAP), covering 300 km and lying ~100 km apart from each other, dating from 2006, were processed to enhance the amplitudes of the water column (Azevedo et al., 2021) and analyzed jointly with conductivity-temperature-depth probes (CTDs) from 2002 and 2005 acquired by the Poseidon research vessel.

The structure of the water column in this area is characterized by the intrusion of Mediterranean Outflow Water (MOW), a warmer and saltier water mass expressed between 500 and 1500 m depth, overlying Subarctic Intermediate Water, where temperature and salinity decrease with depth. Due to the differences in temperature and salinity gradients, the MAP region is auspicious for developing double diffusion, specifically thermohaline staircases (van der Boog et al., 2021). Double diffusion is shown to influence the efficiency of vertical mixing of the different water masses; it affects the vertical transport of nutrients, temperature, and salt and contributes to ocean circulation, which is intrinsically connected to the control of the Earth's climate. Nevertheless, information on it is still lacking.

We detected the expression of thermohaline staircases in temperature and salinity profiles plotted as a function of depth, noticing that the mixing interfaces followed by layers of well-mixed temperature and salinity are well defined as a step structure, and validated them as double diffusion by calculating the Turner angle and density ratio at those depths. Simultaneously, the seismic profiles are characterized by continuous sub-horizontal reflections between ~1200 and 2000 m depth. By correlating the CTD profiles with the seismic images, it is noticeable that the staircases in the vertical profiles correspond to the reflections on the seismic sections at the expected depths, covering almost the entirety of the seismic profiles. Since those reflections are present in the three parallel seismic profiles, we use them to predict the lateral continuity of the step-like structures and build models of the incidence of double-diffusive thermohaline staircases in the region, contributing to the knowledge of the extension and expression of those processes in the Madeira Abyssal Plain.

References:
van der Boog, C. G., Dijkstra, H. A., Pietrzak, J. D., & Katsman, C. A. (2021). Double-diffusive mixing makes a small contribution to the global ocean circulation. Communications Earth & Environment, 2(1), 1-9.
Azevedo, L., Matias, L., Turco, F., Tromm, R., & Peliz, Á. (2021). Geostatistical seismic inversion for temperature and salinity in the Madeira Abyssal Plain. Frontiers in Marine Science, 8, 685007.
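
The Turner angle and density ratio validation step can be reproduced with the TEOS-10 GSW toolbox, as in the sketch below; the cast values and location are placeholders rather than the Poseidon CTD data.

```python
import numpy as np
import gsw

# Placeholder CTD samples: practical salinity, in-situ temperature, pressure
SP = np.array([35.90, 35.80, 35.60, 35.50])    # PSU
t  = np.array([12.00, 11.50, 10.80, 10.20])    # deg C
p  = np.array([800.0, 900.0, 1000.0, 1100.0])  # dbar
lon, lat = -17.0, 32.0                         # roughly the Madeira Abyssal Plain

SA = gsw.SA_from_SP(SP, p, lon, lat)           # Absolute Salinity
CT = gsw.CT_from_t(SA, t, p)                   # Conservative Temperature

# Turner angle (deg) and density ratio at mid-points between samples
Tu, Rrho, p_mid = gsw.Turner_Rsubrho(SA, CT, p)

# 45 deg < Tu < 90 deg marks the salt-finger regime where staircases form
print(Tu, Rrho, (Tu > 45) & (Tu < 90))
```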

Posted ContentDOI
15 May 2023
TL;DR: In this paper, the authors evaluated the use of two types of flow-refuge by Iberian barbel (Luciobarbus bocagei) in an indoor flume.
Abstract: The artificial pulsed flows occurring downstream of hydropower plants due to electricity demand, i.e. hydropeaking, affect habitat selection by fish. This effect is particularly unknown for cyprinids, which are the most representative freshwater fish family in European rivers. This study aimed to evaluate the use of two types of flow-refuge by Iberian barbel (Luciobarbus bocagei) in an indoor flume (6.5 m x 0.7 m x 0.8 m) as a potential solution to mitigate the effects of pulsed flows associated with hydropower production. Based on previous comprehensive research conducted on cyprinids together with the results of this study, the best type of flow-refuge was selected, up-scaled and implemented downstream of small hydropower plants. Two different approach angles with the flume wall, 45° and 70°, were tested to assess the effectiveness of the created hydraulic conditions in attracting fish to the flow-refuge. For each type we tested a base flow event (7 l/s), simulating natural river conditions, and a peak flow event (60 l/s), simulating pulsed flows. For each setting, two flow-refuges (downstream and upstream) were installed in the flume and tested with a school of five Iberian barbels. The utility of the flow-refuges was assessed by the frequency and time of use by fish at two distinct flow-refuge locations, i.e., downstream (the area between the flume and the adjacent flow-refuge walls) and inside (the effective covered area of the flow-refuge). Blood glucose and lactate levels were quantified to identify potential physiological adjustments associated with the pulsed flows and the flow-refuge type. Preliminary results indicate that fish behavior differs according to the flow event and the type of flow-refuge. The frequency of a single fish using the flow-refuge was higher in the 45° refuge during pulsed flows than in the 70° refuge. Overall, the average time spent inside the flow-refuges was higher during pulsed flows for both types, and higher in the 45° refuge. After the 60 l/s events, blood glucose and lactate levels were higher than after the 7 l/s events. In addition, lactate levels for the 45° flow-refuge during the 60 l/s events were the highest compared to the 7 l/s events. These results may be explained by the higher velocities created in the presence of the 45° flow-refuge, shown by ADV measurements, which favoured individual use and rheotactic behaviour, setting off physiological adjustments and increasing residency time and the efficiency of flow-refuge use.

Journal ArticleDOI
TL;DR: In this paper, the geography of minimal surfaces of general type admitting $\mathbb{Z}_2^2$-actions is studied, and it is shown that Gieseker's moduli space $\mathfrak{M}_{K^2,\chi}$ contains surfaces admitting a $\mathbb{Z}_2^2$-action for every admissible pair $(K^2,\chi)$ such that $2\chi-6\le K^2\le 8\chi-8$ or $K^2=8\chi$.
Abstract: In this note, the geography of minimal surfaces of general type admitting $\mathbb{Z}_2^2$-actions is studied. More precisely, it is shown that Gieseker's moduli space $\mathfrak{M}_{K^2,\chi}$ contains surfaces admitting a $\mathbb{Z}_2^2$-action for every admissible pair $(K^2,\chi)$ such that $2\chi-6\le K^2\le 8\chi-8$ or $K^2=8\chi$. The examples considered allow proving that the locus of Gorenstein stable surfaces is not closed in the KSBA-compactification $\overline{\mathfrak{M}}_{K^2,\chi}$ of Gieseker's moduli space $\mathfrak{M}_{K^2,\chi}$ for every admissible pair $(K^2,\chi)$ such that $2\chi-6\le K^2\le 8\chi-8$.

Journal ArticleDOI
TL;DR: In this paper, the authors study the decision of a firm to undertake a one-time proactive preventive investment to limit the occurrence of future disruptions, taking into account different levels of liability.

Posted ContentDOI
15 May 2023
TL;DR: In this article, a watershed model was coupled with a reservoir model to simulate the effects of fires on drinking water supplies, using the outputs of the main streams as inputs to the reservoir branches.
Abstract: Wildfires are a threat to water security worldwide, due to the negative effect of the post-fire mobilization of sediments and associated nutrients and contaminants on the waterbodies located downstream of burned areas. Such impacts have been assessed in field studies and, more recently, also through modelling approaches. Models are valuable tools for anticipating the potential negative impacts of wildfires, allowing different environmental scenarios to be tested. The state of the art in post-fire model adaptation has shown that most studies simulate the hydrological and erosion response in the first post-fire year in situ, without considering the cascading effects on downstream waterbodies. In addition, few studies have evaluated the long-term impacts of wildfires, likely due to the limited available data. Among the existing gaps in post-fire modelling, ash transport has recently been identified as a priority. The lack of ash modelling studies has been ascribed to the limited understanding of ash behavior and the difficulties of incorporating ash-related processes into the structure of existing models.

As a way to fill these research gaps and advance the state of the art in post-fire hydrological modeling, the authors have provided several contributions in recent years. For instance, a watershed model has been coupled with a reservoir model to simulate the effects of fires on drinking water supplies, using the outputs of the main streams as inputs to the reservoir branches. As most simulations commonly end at the watershed outlet, a simple methodology was proposed to assess how the impacts on watercourses propagate to the drinking water supply inlet. The results showed that integrated modeling frameworks are critical for anticipating the off-site impacts of fires. Post-fire management can also influence the impacts of fires beyond the first post-fire rainfall events, when the soil is exposed and ash and sediment transport is greatest. Another modelling exercise evaluated the long-term impacts of different post-fire management options, more specifically terracing, mulching and natural recovery, on water availability and quality. As post-fire ash and sediment mobilization is typically limited to the duration of the rainfall events, which typically last for a few hours, hydrological models that run at a daily time-step can underestimate the environmental impacts of fires. To improve the knowledge of post-fire hydrological processes at the event-based scale, two hydrological models (LISEM and MOHID) were calibrated, accounting for burn severity and initial soil moisture conditions before each specific rainfall event.

The work done in the past years is expected to be of added value for the post-fire modeling community, providing future directions for post-fire hydrological modelling studies.

Posted ContentDOI
15 May 2023
TL;DR: In this article, the authors collected open hydropeaking research questions from over 200 experts in river science, practice, and policy across the globe using an online survey available in five languages, and used a systematic method of determining expert consensus (the Delphi method) to identify 100 core questions related to the following thematic fields: hydrology, physico-chemical properties of water, river morphology and sedimentology, ecology and biology, socio-economics and energy markets, policy and regulation, as well as management and mitigation measures.
Abstract: Hydropeaking has received increasing attention in recent years, but many knowledge gaps remain, potentially hampering effective policy and management efforts in rivers affected by this type of hydropower production. In this study, we collected open hydropeaking research questions from over 200 experts in river science, practice, and policy across the globe using an online survey available in five languages. We used a systematic method of determining expert consensus (the Delphi method) to identify 100 core questions related to the following thematic fields: (i) hydrology, (ii) physico-chemical properties of water, (iii) river morphology and sedimentology, (iv) ecology and biology, (v) socio-economics and energy markets, (vi) policy and regulation, as well as (vii) management and mitigation measures. The consensus list of questions shall inform and guide researchers in focusing their efforts to foster a better science-policy interface, thereby improving the sustainability of peak-operating hydropower in a variety of settings.

Journal ArticleDOI
TL;DR: In this paper, the authors designed and prepared novel hybrid membranes conjugated with IBF, thus avoiding its administration to end-stage renal disease (ESRD) patients. Two novel silicon precursors containing IBF were synthesized and, by combining a sol-gel reaction with the phase inversion technique, four monophasic hybrid integral asymmetric cellulose acetate/silica/IBF membranes were produced.
Abstract: Currently available hemodialysis (HD) membranes are unable to safely remove protein-bound uremic toxins (PBUTs), especially those bonded to human serum albumin (HSA). To overcome this issue, the prior administration of high doses of HSA competitive binders, such as ibuprofen (IBF), has been proposed as a complementary clinical protocol to increase HD efficiency. In this work, we designed and prepared novel hybrid membranes conjugated with IBF, thus avoiding its administration to end-stage renal disease (ESRD) patients. Two novel silicon precursors containing IBF were synthesized and, by the combination of a sol-gel reaction and the phase inversion technique, four monophasic hybrid integral asymmetric cellulose acetate/silica/IBF membranes in which silicon precursors are covalently bonded to the cellulose acetate polymer were produced. To prove IBF incorporation, methyl red dye was used as a model, thus allowing simple visual color control of the membrane fabrication and stability. These smart membranes may display a competitive behavior towards HSA, allowing the local displacement of PBUTs in future hemodialyzers.

Posted ContentDOI
15 May 2023
TL;DR: Cloux et al., as discussed by the authors, used a validated Lagrangian model to track floating microplastics coming from potential sources, such as rivers, land-based points, and maritime traffic.
Abstract: Plastic debris in the oceans is a major environmental concern that knows no borders. This material, which is the result of poor waste management on land, is spread across the ocean over long distances and over long periods of time. When macroplastics (those larger than 5 mm) break down through mechanical and chemical processes, they become microplastics. It is difficult to estimate the extent and predict the behavior of these tiny pieces of plastic, but their presence in a variety of organisms, including mollusks and humans, has raised concerns about the potential consequences, which are not yet fully understood but are believed to be significant. To address this problem at a global scale, it is important to identify and quantify the sources of plastic waste that end up in the ocean. We use a validated Lagrangian model (Cloux et al., 2022) to track floating particles coming from potential sources. We studied three types of sources along the Atlantic coast of Spain: rivers, land-based points, and maritime traffic. Over a 7-year period, we analyzed the concentrations of these plastics in the open seas. Our results showed that a significant contribution comes from these sources, both at short and medium distances from the coast. If we consider the fact that some of the simulated particles get washed up on the shore, the concentration of particles near the coast is even higher in certain locations and the concentrations at medium distances are reduced. Considering semi-enclosed areas, the influence of seasonality was studied for the Bay of Biscay, the Gulf of Cadiz and the Alboran Sea. The presence of particles in each zone varies between warm and cold seasons, depending on the dynamics of the zone. The results of this study are under review (Cloux et al., n.d.).

References:
Cloux, S., Allen-Perkins, S., de Pablo, H., Garaboa-Paz, D., Montero, P., & Pérez-Muñuzuri, V. (2022). Validation of a Lagrangian model for large-scale macroplastic tracer transport using mussel-peg in NW Spain (Ría de Arousa). Science of The Total Environment, 822, 153338. https://doi.org/10.1016/j.scitotenv.2022.153338
Cloux, S., Pérez-Pérez, P., de Pablo, H., & Pérez-Muñuzuri, V. (n.d.). A Regional Lagrangian Model to Evaluate the Dispersion of Floating Macroplastics in the North Atlantic Ocean from Different Types of Sources in the Iberian Peninsula. Available at SSRN 4306128.
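
As a schematic of the Lagrangian tracking idea (not the validated model of Cloux et al., which advects particles with ocean-model currents and includes beaching), the sketch below integrates passive surface particles released from an assumed source point through a toy velocity field.

```python
import numpy as np

def surface_current(x, y):
    """Toy rotating-gyre velocity field (m/s); a real run would interpolate
    ocean-model surface currents at the particle positions instead."""
    return -0.05 * y, 0.05 * x

# Seed particles at an assumed river-mouth source (coordinates in km)
rng = np.random.default_rng(0)
px = rng.normal(0.0, 1.0, 1000)
py = rng.normal(0.0, 1.0, 1000)

dt = 3600.0                    # 1-hour time step (s)
for step in range(24 * 30):    # one month of drift
    u, v = surface_current(px, py)
    px += u * dt / 1000.0      # metres advected, converted to km
    py += v * dt / 1000.0

# px, py now hold the dispersed particle positions after one month
```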