Showing papers in "Journal of Physics" in 2023
TL;DR: The latest development in ROOT/TMVA is reported: a new tool that takes trained ONNX deep learning models and emits C++ code that can be easily included and invoked for fast inference of the model, with minimal dependencies.
Abstract: We report the latest development in ROOT/TMVA, a new tool that takes trained ONNX deep learning models and emits C++ code that can be easily included and invoked for fast inference of the model, with minimal dependencies. An introduction to SOFIE (System for Optimized Fast Inference code Emit) is presented, with examples of the interface and generated code. We discuss the latest expanded support for a variety of neural network operators, including convolutional and recurrent layers, as well as the integration with RDataFrame. We demonstrate the latest performance of this framework with a set of benchmarks.
5 citations
TL;DR: In this article, physical models of a fusion target are constructed and integrated into a unified digital model of a multi-domain system; the physical model is simulated and iteratively modified, and edge computing technologies are used for information modeling.
Abstract: The physical design of the fusion target is an important part of controlled thermonuclear fusion, and the geometric model and material selection of the target are also critical to achieving fusion ignition. We have modularised the target and introduced digital modeling, edge computing, and deep learning technologies to build a data-driven hybrid computing framework. We construct physical models and integrate them into a unified digital model of a multi-domain system, simulate and iteratively modify the physical model, and use edge computing technologies for information modeling. Edge computing is well suited to the calculation of each module of the target. The modules are both correlated and independent; the values of the fusion ignition temperature and density achieved in the target are obtained, and the neutron yields in the ignition and main fuel regions are 10^16 - 10^17 and 10^19, respectively. This provides an important reference for the design of actual fusion targets.
4 citations
TL;DR: In this article, the authors establish the mathematical model of the Vehicle Routing Problem with Simultaneous Pickup and Delivery (VRPSPD) for 3 kg LPG gas distribution and solve it using the Clarke and Wright savings method.
Abstract: This study aims to establish the mathematical model of the Vehicle Routing Problem with Simultaneous Pickup and Delivery (VRPSPD) for 3 kg LPG gas distribution and to solve it using the Clarke and Wright savings method. The data used include the list of consumer areas served by the delivery company, the amount of consumer demand, the vehicle type, and the vehicle capacity. The data are then modeled as a VRPSPD problem and solved with the Clarke and Wright savings method [1]. Based on the calculation, the total mileage of the computed routes is 160 km, while the total mileage of the company's current routes is 201 km. Thus the Clarke and Wright savings algorithm is capable of providing mileage savings of 20.03%.
3 citations
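The Clarke and Wright savings step described above can be sketched as follows. The depot, distance matrix, demands and capacity are made-up illustrations, and for brevity the sketch handles plain capacitated delivery rather than the full simultaneous-pickup-and-delivery constraint:

```python
# Minimal Clarke-Wright savings sketch (hypothetical data, not the study's).
# Savings s(i, j) = d(0, i) + d(0, j) - d(i, j): distance saved by serving
# customers i and j on one route instead of two separate out-and-back trips.

# Symmetric distance matrix in km; index 0 is the depot (made-up numbers).
d = [
    [0, 10, 12, 9],
    [10, 0, 4, 11],
    [12, 4, 0, 8],
    [9, 11, 8, 0],
]
demand = [0, 3, 2, 4]       # LPG cylinders per customer (made up)
capacity = 6                # vehicle capacity

# 1. Compute and sort savings for all customer pairs, largest first.
pairs = [(d[0][i] + d[0][j] - d[i][j], i, j)
         for i in range(1, len(d)) for j in range(i + 1, len(d))]
pairs.sort(reverse=True)

# 2. Start with one out-and-back route per customer, then merge routes at
#    their endpoints in order of decreasing savings while capacity allows.
routes = {i: [i] for i in range(1, len(d))}
load = {i: demand[i] for i in routes}
owner = {i: i for i in routes}          # customer -> route id

for s, i, j in pairs:
    if s <= 0:
        continue
    ri, rj = owner[i], owner[j]
    if ri == rj or load[ri] + load[rj] > capacity:
        continue
    # Classic condition: customers must sit at an end of their routes.
    if i not in (routes[ri][0], routes[ri][-1]):
        continue
    if j not in (routes[rj][0], routes[rj][-1]):
        continue
    if routes[ri][-1] != i:
        routes[ri].reverse()
    if routes[rj][0] != j:
        routes[rj].reverse()
    for c in routes[rj]:
        owner[c] = ri
    routes[ri].extend(routes.pop(rj))
    load[ri] += load.pop(rj)

# Total mileage of the merged routes (depot -> ... -> depot per route).
total = sum(
    d[0][r[0]] + d[r[-1]][0] + sum(d[a][b] for a, b in zip(r, r[1:]))
    for r in routes.values()
)
```

With these toy numbers, customers 1 and 2 merge (savings 18) while customer 3 stays on its own route because of the capacity limit.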
TL;DR: In this paper, the authors propose a non-real-time online working mode that differs from the traditional real-time online working mode; the two modes are compared and analyzed in terms of the timeliness of active event reporting, communication traffic, the impact on the processing mechanism of the master station system, and the working duration of the power supply.
Abstract: Wireless communication terminals commonly used in distribution automation pilot projects have unstable power supplies under extreme conditions. To address this, this paper proposes a non-real-time online working mode that differs from the traditional real-time online working mode. A switching method between the two working modes was designed, and the two modes were compared and analyzed in terms of the timeliness of active event reporting, communication traffic, the impact on the processing mechanism of the master station system, and the impact on the working duration of the power supply. The results show that the average power consumption of the wireless communication terminal under the non-real-time working mode is reduced from the watt level to the milliwatt level, thus solving the power supply problem of the wireless communication terminal under extreme conditions.
2 citations
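The watt-to-milliwatt reduction quoted above is what one would expect from duty cycling the terminal; this back-of-the-envelope sketch uses hypothetical power and timing figures, not values from the paper:

```python
# Average power of a duty-cycled wireless terminal (all figures hypothetical).
p_active_w = 2.0        # power while online and transmitting (assumption)
p_sleep_w = 0.002       # deep-sleep power between wake-ups (assumption)

# Non-real-time mode: wake for 10 s every hour to report queued events.
wake_s, period_s = 10.0, 3600.0
duty = wake_s / period_s

# Time-weighted average: mostly asleep, briefly active.
avg_w = p_active_w * duty + p_sleep_w * (1.0 - duty)
```

With these assumptions the average drops to a few milliwatts, i.e. from the watt level to the milliwatt level, at the cost of event-reporting latency up to one wake-up period.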
TL;DR: The 26th edition of the symposium was held in person in Pisa on September 28th - 30th, 2022, at the “Le Benedettine” conference center.
Abstract: Pisa, September 28th – 30th, 2022
Editorial preface
Nowadays, turbomachinery plays a key role in both transportation and power generation. Although power generation is shifting towards the exploitation of an increasing share of renewable energy sources, large industrial turbomachines, especially gas turbines, will remain indispensable for a viable energy mix in the near future. Turbomachinery is continuously evolving to keep pace with the new challenges of the energy transition and, in particular, with the request for higher efficiencies and increased sustainability. To meet these new goals, a wide range of research activities is being performed. Even though modeling and computational fluid dynamics are powerful tools that can provide useful support to research, experimental activities are still at the base of many research programs and are of utmost importance to guarantee proper insight into the phenomena taking place in turbomachinery and to estimate their performance. Since its initiation (1969), this symposium has provided a forum for researchers from universities, research institutes and industry to get together to discuss problems and share experiences involved in making measurements in turbomachines. The symposium covers the development of measurement techniques for the study of aerothermal phenomena in components such as cascades, compressors, turbines, engines, and power plants. Main topics are (but not limited to):
• New measurement techniques
• New probes and devices
• New or advanced test rigs
• New techniques for monitoring engine operation
• New methods for experimental data analysis
The 26th edition of the symposium was held in person in Pisa on September 28th - 30th, 2022, at the “Le Benedettine” conference center. This collection includes the papers presented at the conference. The list of Editors, Editorial board, Organizing committee, Senior Scientific and Advising Committee, and Local organizing committee is available in this pdf.
2 citations
TL;DR: In this paper, the authors use the AIC (Akaike Information Criterion) and the BIC (Bayesian Information Criterion) as indicators to reasonably balance the complexity and the accuracy of the model.
Abstract: The ARIMA model forecasting algorithm is a commonly used time series forecasting algorithm. This paper first obtains a stationary sequence through a differencing operation, then considers the AR model, the MA model, and the full ARIMA model, selecting the appropriate model for prediction and using it for the adaptive model design. In the field of machine learning, model accuracy tends to improve as model complexity increases, but models with a complex structure usually cause overfitting problems. In order to reasonably balance the complexity and the accuracy of the model, the appropriate indicators AIC (Akaike Information Criterion) and BIC (Bayesian Information Criterion), which introduce penalty terms, are used to make the judgment; the established ARIMA(1,1,2) model meets the requirements.
2 citations
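The AIC/BIC penalty terms mentioned above have standard closed forms; this sketch compares two candidate models using a made-up log-likelihood rather than the paper's actual ARIMA fit:

```python
import math

def aic(log_likelihood, k):
    # Akaike Information Criterion: 2k - 2 ln L, k = number of parameters.
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    # Bayesian Information Criterion: k ln n - 2 ln L, n = sample size.
    return k * math.log(n) - 2 * log_likelihood

# Hypothetical comparison: an ARIMA(1,1,2)-style fit with k = 4 parameters
# vs a more complex fit with k = 6, both on n = 100 observations.
candidates = {
    "ARIMA(1,1,2)": (-120.0, 4),
    "ARIMA(2,1,3)": (-119.5, 6),   # slightly better fit, more parameters
}
n = 100
best = min(candidates, key=lambda m: bic(*candidates[m], n))
```

Because the BIC penalty k ln n outweighs the small likelihood gain of the more complex fit, the simpler model wins here, which is exactly the complexity-accuracy trade-off the abstract describes.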
TL;DR: In this paper, it is shown that all particles near a black hole share the same symmetry, and conservation of this symmetry may completely remove the information paradox: the quantum black hole has no interior, or equivalently, the black hole interior is a quantum clone of the exterior region.
Abstract: To apply the laws of General Relativity to quantum black holes, one first needs to remove the horizon singularity by means of Kruskal-Szekeres coordinates. This however doubles spacetime, which thereby is equipped with an exact binary symmetry. All particles near a black hole share the same symmetry, and conservation of this symmetry may completely remove the information paradox: the quantum black hole has no interior, or equivalently, the black hole interior is a quantum clone of the exterior region. These observations, totally overlooked in most of the literature on quantum black holes, resolve some issues concerning conservation of information. Some other problems do remain.
2 citations
TL;DR: In this paper, a metric mutation anomaly detection method based on the combination of machine learning and statistical algorithms is proposed for the power grid dispatching automation system; it can quickly locate the anomalous components of the system and assist the system maintenance personnel in making decisions.
Abstract: With the increase in the scale, functionality and complexity of the power grid dispatching automation system, the difficulty of system operation and maintenance increases significantly. This paper presents a metric mutation anomaly detection method based on the combination of machine learning and statistical algorithms. First, the system data are smoothed by a machine learning algorithm, and the difference between the smoothed value and the real value is calculated. Then, the differences are used to detect data anomalies with an ensemble of statistical methods. The effectiveness and accuracy of the method are verified with test data from the power grid dispatching automation system. The method can quickly locate the anomalous components of the system and assist the system maintenance personnel in making decisions.
2 citations
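The smooth-then-threshold scheme can be sketched as follows, with a centered moving average standing in for the paper's machine-learning smoother and a synthetic metric series in place of real dispatching data:

```python
import statistics

def detect_mutations(series, window=5, k=3.0):
    """Flag indices where the value jumps away from its smoothed estimate."""
    # Stand-in smoother: centered moving average (the paper uses an ML model).
    half = window // 2
    smoothed = [
        statistics.fmean(series[max(0, i - half): i + half + 1])
        for i in range(len(series))
    ]
    # Residuals between raw and smoothed values; large residuals = mutations.
    residuals = [x - s for x, s in zip(series, smoothed)]
    sigma = statistics.pstdev(residuals)
    return [i for i, r in enumerate(residuals) if abs(r) > k * sigma]

# Flat metric with one injected mutation at index 7 (synthetic data).
metric = [10.0] * 15
metric[7] = 25.0
anomalies = detect_mutations(metric, window=5, k=3.0)
```

The single injected spike is the only point whose residual exceeds the 3-sigma threshold, so it alone is flagged.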
TL;DR: In this paper, the authors combine the Rabin-p cryptosystem and the Spritz algorithm in a hybrid cryptographic scheme, in which the encrypted message can be recovered back to the original message while maintaining a small decryption time.
Abstract: Security in the process of sending digital messages is very important to prevent theft by unwanted parties. In order to protect the messages, modern cryptography techniques come into play. A public key cryptosystem based on integer factorization, such as Rabin-p, can provide high confidentiality as long as the modulus is a very large integer. On the other hand, a longer modulus causes two problems: the ciphertext becomes larger and the decryption process takes longer. In order to solve this problem, in this research we combine the Rabin-p cryptosystem and the Spritz algorithm in a hybrid cryptosystem. The message is encrypted with the Spritz algorithm, and the key of the Spritz algorithm is then protected by the Rabin-p cryptosystem. In this scheme, the encrypted message can be recovered back to the original message while maintaining a small decryption time.
2 citations
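The hybrid structure described above (stream-encrypt the message, public-key-encrypt the session key) can be sketched as below. A SHA-256 counter-mode keystream stands in for Spritz, and textbook Rabin with toy primes stands in for Rabin-p; none of this is the paper's implementation, and it is not secure:

```python
import hashlib

# --- Stand-in stream cipher (the paper uses Spritz) ---------------------
def keystream(key: bytes, length: int) -> bytes:
    out = b""
    ctr = 0
    while len(out) < length:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def stream_xor(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so this both encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# --- Textbook Rabin for the session key (the paper uses Rabin-p) --------
p, q = 10007, 10039        # toy primes with p = q = 3 (mod 4); not secure
n = p * q

def rabin_encrypt(m: int) -> int:
    return pow(m, 2, n)

def rabin_roots(c: int) -> set:
    # Square roots mod p and mod q, combined via CRT: four candidates.
    mp, mq = pow(c, (p + 1) // 4, p), pow(c, (q + 1) // 4, q)
    return {
        (a * q * pow(q, -1, p) + b * p * pow(p, -1, q)) % n
        for a in (mp, p - mp)
        for b in (mq, q - mq)
    }

# Hybrid round trip: the session key encrypts the message, Rabin wraps the
# key. The receiver tries each root and keeps the one revealing a known
# header (one simple way to disambiguate the four Rabin roots).
message = b"MSG:meet at dawn"
session_key = 0xBEEF                     # fixed for reproducibility; < n
ciphertext = stream_xor(session_key.to_bytes(4, "big"), message)
wrapped_key = rabin_encrypt(session_key)

recovered = None
for root in rabin_roots(wrapped_key):
    candidate = stream_xor(root.to_bytes(4, "big"), ciphertext)
    if candidate.startswith(b"MSG:"):
        recovered = candidate
```

The design point the abstract makes survives even in this toy: the slow public-key operation touches only the short session key, while the bulk message goes through the fast stream cipher.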
TL;DR: Performance portability libraries allow code to be written once and run on different architectures with close-to-native performance, avoiding code duplication that is not sustainable in terms of maintainability and testability of the software.
Abstract: For CMS, heterogeneous computing is a powerful tool to face the computational challenges posed by the upgrades of the LHC, and it will be used in production at the High Level Trigger during Run 3. In principle, to offload the computational work onto non-CPU resources while retaining their performance, different implementations of the same code are required. This would introduce code duplication, which is not sustainable in terms of maintainability and testability of the software. Performance portability libraries make it possible to write code once and run it on different architectures with close-to-native performance. The CMS experiment is evaluating performance portability libraries for the near-term future.
2 citations
TL;DR: In this article, the authors present an important milestone for the CMS High Granularity Calorimeter (HGCAL) event reconstruction: the deployment of the GPU clustering algorithm (CLUE) to the CMS software.
Abstract: We present an important milestone for the CMS High Granularity Calorimeter (HGCAL) event reconstruction: the deployment of the GPU clustering algorithm (CLUE) to the CMS software. The connection between GPU CLUE and the preceding GPU calibration step is thus made possible, further extending the heterogeneous chain of HGCAL’s reconstruction framework. In addition to improvements brought by CLUE’s deployment, new recursive device kernels are added to efficiently calculate the position and energy of CLUE clusters. Data conversions between GPU and CPU are included to facilitate the validation of the algorithms and increase the flexibility of the reconstruction. For the first time in HGCAL, conditions data are deployed to the GPU and made available on demand at any stage of the heterogeneous reconstruction. This is achieved via a new geometry ordering scheme in which physical and memory locations are connected. This scheme is successfully tested with the GPU CLUE version reported here, and is expected to have a broad range of applicability for future heterogeneous developments in CMS. Finally, the performance of the combined calibration and clustering algorithms on GPU is assessed and compared to its CPU counterpart.
TL;DR: Li et al. use CNNs to identify and classify different types of car paint defects, such as bubble, dust, fouling, pinhole, sagging, scratch, and shrink.
Abstract: In the study of using images to detect car paint defects, the current need is to use deep Convolutional Neural Networks (CNNs) to identify and classify different types of car paint defects, so as to fully exploit image processing in the field of automatic car paint defect detection. Using the collected car paint defect images, a car paint defect dataset is established. The preprocessing of the original data and the application of three CNN-based image classification models are visually presented. First, a dataset of 7 types of car body defects, including bubble, dust, fouling, pinhole, sagging, scratch, and shrink, has been established, with a total of 2468 images. The MobileNet-V2, VGG16, and ResNet34 models are selected for training. As a result, after 30 training iterations, the MobileNet-V2 algorithm achieved 94.3% accuracy, the accuracy of the VGG16 algorithm is as high as 99.9%, and the accuracy of the ResNet34 algorithm is maintained at 99.2%. In summary, deep learning has great potential for car paint defect detection and deserves further development.
TL;DR: In this article, the authors develop a simulation tool for the mileage covered by VIPV, which takes into account various use profiles and different characteristics of the vehicles and of the PV system.
Abstract: In order to improve primary energy saving and reduce greenhouse gas emissions, vehicle-integrated photovoltaics (VIPV) attract ongoing interest. Studies on the benefits of vehicle solar roofs, which take into account all the losses and the monthly variation in different climate conditions, are required. Therefore, we developed a simulation tool for the mileage covered by VIPV. This tool takes into account various use profiles and different characteristics of the vehicles and of the PV system. Focusing on city buses, simulations show that many parameters influence the outputs of the model, mainly the geographic location, the shading losses, the electric architecture and the battery saturation. With projections of the technology in 2030, VIPV covers up to 9739 km of annual mileage. This represents up to 24% of the total distance. For the best month, it can reach up to 47 km/day. For the average Europe case, with 30% shading losses, VIPV covers only 3711 km of annual mileage. The upgrade of the technology from 2022 to 2030 improves the benefits of VIPV by approximately 34%. Life cycle assessment of the solar city bus shows neutral to high gains. The carbon footprint is up to 28 t CO2-equivalent avoided emissions over a 20-year lifespan.
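The order of magnitude of the reported mileage can be reproduced with a back-of-the-envelope energy balance; every figure in this sketch is a hypothetical assumption, not a value from the paper's simulation tool:

```python
# Rough annual solar-covered mileage for a city bus (all figures assumed).
panel_area_m2 = 20.0                 # roof PV area
panel_eff = 0.22                     # module efficiency
annual_insolation_kwh_m2 = 1200.0    # average-Europe horizontal insolation
shading_loss = 0.30                  # urban shading, as in the abstract
system_loss = 0.15                   # wiring, conversion, battery saturation
bus_consumption_kwh_km = 1.2         # electric bus consumption

pv_kwh = (panel_area_m2 * panel_eff * annual_insolation_kwh_m2
          * (1.0 - shading_loss) * (1.0 - system_loss))
km = pv_kwh / bus_consumption_kwh_km
```

With these assumptions the PV roof covers roughly 2600 km per year, the same few-thousand-kilometre order of magnitude as the 3711 km average-Europe case reported above.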
TL;DR: In this paper, the authors present a novel approach to dividing the stages of renewable energy source grid connection with the aim of counteracting volatility and alleviating its effects; two stages with heterogeneous dispatching strategies are defined, the first aiming at minimizing RES power limitation and the second taking the secure operation of the power grid as the optimization target by adjusting the optimization objectives of the intraday rolling and real-time dispatching plans.
Abstract: With the increasing penetration rate of renewable energy sources (RESs), more flexibility must be released to cope with the demand caused by the uncertainty of RES output. Thus, it is paramount to divide the RES penetration rate into different stages from the perspective of dispatching, so as to represent the flexibility supply-demand relationship. Based on these considerations, this paper presents a novel approach to dividing the level of RES grid connection with the aim of counteracting volatility and alleviating its effects. Herein, we define two specific stages with heterogeneous dispatching strategies: the first stage aims at minimizing RES power limitation, while the second stage takes the secure operation of the power grid as the optimization target by adjusting the optimization objectives of the intraday rolling and real-time dispatching plans. Simulation results demonstrate the feasibility of the partition method.
TL;DR: In this article, the authors extend Gaussian processes to include nontrivial features in the speed of sound, such as bumps, kinks, and plateaus, which are predicted by nuclear models with exotic degrees of freedom.
Abstract: Gaussian processes provide a promising framework by which to extrapolate the equation of state (EoS) of cold, catalyzed matter beyond 1-2 times nuclear saturation density. Here we discuss how to extend Gaussian processes to include nontrivial features in the speed of sound, such as bumps, kinks, and plateaus, which are predicted by nuclear models with exotic degrees of freedom. Using a fully Bayesian analysis incorporating measurements from X-ray sources, gravitational wave observations, and perturbative QCD results, we show that these features are compatible with current constraints and report on how the features affect the EoS posteriors.
TL;DR: In this paper, the authors discuss the extension of the Goldstone and Englert-Brout-Higgs mechanisms to non-Hermitian Hamiltonians that possess an antilinear PT symmetry.
Abstract: We discuss the extension of the Goldstone and Englert-Brout-Higgs mechanisms to non-Hermitian Hamiltonians that possess an antilinear PT symmetry. We study a model due to Alexandre, Ellis, Millington and Seynaeve and show that for the spontaneous breakdown of a continuous global symmetry we obtain a massless Goldstone boson in all three of the antilinear symmetry realizations: eigenvalues real, eigenvalues in complex conjugate pairs, and eigenvalues real but eigenvectors incomplete. In this last case we show that it is possible for the Goldstone boson mode to be a zero-norm state. For the breakdown of a continuous local symmetry the gauge boson acquires a non-zero mass by the Englert-Brout-Higgs mechanism in all realizations of the antilinear symmetry, except the one where the Goldstone boson itself has zero norm, in which case, and despite the fact that the continuous local symmetry has been spontaneously broken, the gauge boson remains massless.
TL;DR: In this paper, the performance of nickel foam electrodes in 1.0 M KOH after treatment in various concentrations of hydrochloric acid and sulphuric acid is investigated; the greatest performance was achieved using 0.50 M H2SO4, as measured by LSV, EIS, CV and ECSA.
Abstract: Water electrolysers are multi-component systems whose performance relies on each part performing its task. Great emphasis has been placed on the development of efficient catalyst-coated electrodes; however, the efficacy of the underlying substrate itself has been overlooked. This paper investigates the performance of nickel foam electrodes in 1.0 M KOH after being treated in various concentrations of hydrochloric acid and sulphuric acid. The greatest performance was achieved using 0.50 M H2SO4, as measured by LSV, EIS, CV and ECSA, resulting in a 27% decline in series resistance relative to untreated nickel foam. The series resistance decreased continuously with acid concentration until a plateau was reached at a concentration of 0.5 M; this trend was seen for both types of acid. Utilising these preparation methods for nickel foam electrodes can notably enhance electrode performance.
TL;DR: In this article, the control system of lower limb exoskeletons for rehabilitation is discussed; based on the public literature of recent years, it is summarized from three aspects, i.e., movement mode switching, human gait recognition and human-exoskeleton interaction control.
Abstract: Research on exoskeleton robots has been carried out widely around the world for many years; in particular, the development of new lower limb exoskeletons for rehabilitation and assistance is one of the key research directions. This paper focuses on the control system of lower limb exoskeletons for rehabilitation. Based on the public literature of recent years, the field is summarized from three aspects, i.e., movement mode switching, human gait recognition and human-exoskeleton interaction control. Finally, the technical issues of current lower limb rehabilitation exoskeleton control strategies are discussed, the future development prospects and research directions of the lower limb rehabilitation exoskeleton are outlined, and some suggestions on how to achieve more efficient and accurate control are given.
TL;DR: The NUSES satellite is a technological pathfinder for the development and testing of innovative technologies and observational strategies for future missions aimed at investigating cosmic radiation, astrophysical neutrinos, the Sun-Earth environment, space weather and magnetosphere-ionosphere-lithosphere coupling.
Abstract: NUSES is a space mission promoted by the Gran Sasso Science Institute (GSSI) in collaboration with Thales Alenia Space and the Italian National Institute for Nuclear Physics (INFN). NUSES will be a technological pathfinder for the development and testing of innovative technologies and observational strategies for future missions aimed at investigating cosmic radiation, astrophysical neutrinos, the Sun-Earth environment, space weather and magnetosphere-ionosphere-lithosphere coupling (MILC). The NUSES satellite will host two payloads, TERZINA and ZIRÉ. The first one, TERZINA, consists of a compact optical instrument equipped with a Cherenkov telescope based on state-of-the-art Silicon Photomultipliers (SiPMs). TERZINA will characterize the Cherenkov signature of the high energy proton-induced background, and it will therefore be instrumental for future missions aimed at the detection of astrophysical Earth-skimming neutrinos. The second payload, ZIRÉ, will be tailored to provide high precision measurements of the flux intensity of electrons, protons and light cosmic ray nuclei up to hundreds of MeV and of gamma-rays in the 0.1 MeV - 10 MeV energy range. ZIRÉ will also be capable of pinpointing possible MILC events by measuring the charged particle flux. In this paper we report the overall description of the instruments onboard the NUSES satellite along with the scientific and technological objectives of the mission.
TL;DR: In this paper, an intelligent optimal control strategy for heat pump systems (HPS) based on digital twin technology is proposed, which can effectively improve energy utilization efficiency and has a better control effect, providing technical support for the operation and management of HPS.
Abstract: In the context of the digital transformation of the power grid, digital twins, as one of the key technologies promoting the digital and intelligent development of the power industry, are still in the theoretical research stage. Focusing on the application of digital twin technology in the power industry, an intelligent optimal control strategy for heat pump systems (HPS) based on digital twin technology is proposed. First, the DOE-2 model of the HPS and the prediction model of the building heat load are established. Second, based on real-time monitoring data, an improved quantum particle swarm optimization algorithm is used for the rolling optimization of model parameters. Finally, the optimal control strategy of the HPS is formulated based on comprehensive consideration of energy consumption, the number of startups and shutdowns, and the temperature control requirements of the HPS. The example results show that the digital twin model has high accuracy. Compared with the manual control method, the proposed optimal control strategy can effectively improve energy utilization efficiency and has a better control effect, providing technical support for the operation and management of the HPS.
TL;DR: FEYNCALC 10 is an open-source MATHEMATICA package with new functionality relevant for multiloop calculations, such as topology identification by means of the Pak algorithm, the search for equivalent master integrals and their graph representations, as well as automatic derivation of Feynman parametric representations for a wide class of multiloop integrals.
Abstract: Abstract We report on the new functionality of the open-source MATHEMATICA package FEYNCALC relevant for multiloop calculations. In particular, we focus on such tasks as topology identification by means of the Pak algorithm, search for equivalent master integrals and their graph representations as well as automatic derivations of Feynman parametric representations for a wide class of multiloop integrals. The functions described in this report are expected to be finalized with the official release of FEYNCALC 10. The current development snapshot of the package including the documentation is publicly available on the project homepage. User feedback is highly encouraged.
TL;DR: In this article, the authors categorize image data augmentation algorithms into three kinds from the perspective of algorithm strategy: matrix transformation algorithms, feature expansion algorithms, and neural-network-based model generation algorithms.
Abstract: Image data augmentation algorithms effectively address the problem of insufficient training samples for deep learning in some application fields, and scholars typically choose among them for various computer vision tasks. But as the algorithms develop rapidly, the early classification that sorts data augmentation algorithms into classical methods and generative methods is no longer suitable, because such a classification misses some other meaningful strategies. Besides, it is frustrating to decide which method to use when there are so many optional algorithms to choose from. With the goal of making some suggestions, this paper categorizes image data augmentation algorithms into three kinds from the perspective of algorithm strategy: matrix transformation algorithms, feature expansion algorithms, and neural-network-based model generation algorithms. The paper analyzes the typical algorithm principles, performance, application scenarios, research status and future challenges, and forecasts the development trend of data augmentation algorithms. The paper can provide an academic reference for data augmentation algorithms in the fields of medicine and the military.
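A minimal instance of the matrix-transformation category is a flip or rotation applied directly to the pixel matrix; this toy sketch uses a nested list in place of a real image:

```python
# Matrix-transformation augmentations on a toy grayscale "image"
# represented as a nested list (rows of pixel values).

def hflip(img):
    # Mirror each row: left-right flip.
    return [row[::-1] for row in img]

def vflip(img):
    # Reverse the row order: top-bottom flip.
    return img[::-1]

def rot90(img):
    # Rotate 90 degrees clockwise: reverse rows, then transpose.
    return [list(col) for col in zip(*img[::-1])]

img = [[1, 2],
       [3, 4]]
# Three augmented variants of the same sample, label unchanged.
augmented = [hflip(img), vflip(img), rot90(img)]
```

Each transform produces a new training sample from the same labeled image, which is exactly how this category multiplies a small dataset.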
TL;DR: In this paper, the parity-odd contribution to the trace anomaly of a chiral fermion is discussed in terms of Feynman diagrams, and the results obtained using dimensional regularization and the Breitenlohner-Maison prescription are compared with other approaches.
Abstract: We review recent discussions regarding the parity-odd contribution to the trace anomaly of a chiral fermion. We pay special attention to the perturbative approach in terms of Feynman diagrams, comparing in detail the results obtained using dimensional regularization and the Breitenlohner–Maison prescription with other approaches.
TL;DR: In this article, the authors use conditional Invertible Neural Networks (cINNs) to learn posterior distributions from which the most likely electromagnetic field given a measured signal trace can be inferred, and extend the method with an autoencoder, reducing the parameter phase space and decoupling the cINN from specific data shapes.
Abstract: The reconstruction of cosmic ray-induced air showers from measurements of radio waves constitutes a major challenge. In this work, we focus on recovering the full three-dimensional electromagnetic field from two recorded signal traces of an antenna station covering two horizontal polarization directions. The simulated field is folded by a direction and frequency-dependent characteristic antenna response pattern, resulting in voltage signal traces as a function of time. Both signal traces are contaminated by simulated background noise. We use conditional Invertible Neural Networks (cINNs) to learn posterior distributions, from which the most likely electromagnetic field given a measured signal trace can be inferred. To improve robustness, we extend the method with an autoencoder by reducing the parameter phase space and decoupling the cINN from specific data shapes. Thereby, each signal trace is condensed into a small number of abstract parameters in the latent space on which the cINN operates. The presented method shows promising results and can be transferred to other unfolding problems where the recovery of the pre-measurement state is of interest.
TL;DR: In this article, a modulation recognition network for underwater acoustic communication signals based on deep learning is proposed; it uses ResNet to capture deep features and combines ResNet with a modulation recognition network.
Abstract: Marine information technology plays an important role in the development of marine resources, marine climate early warning, and other fields. Underwater acoustic communication technology can help us better access marine information. The performance of underwater acoustic signal modulation recognition algorithms depends on the accuracy of feature extraction. However, due to excessive underwater noise, many traditional algorithms cannot recognize features well. For this reason, this paper proposes a modulation recognition network for underwater acoustic communication signals based on deep learning. ResNet can capture deep features, and the proposed method combines ResNet with a modulation recognition network. Finally, experiments prove the effectiveness of this method.
TL;DR: In this paper, a femtosecond laser system is used to process β-TCP pellet surfaces, which results in surface morphology modification, turning the flat mirror-polished surface into a rough and opaque one.
Abstract: Tricalcium phosphate (Ca3(PO4)2, TCP) is one of the most studied and used materials for bioresorbable implants. The β phase has slower dissolution dynamics and ensures mechanical support for a longer time in a biological environment, while a faster release of ions characterizes the α phase, which triggers a stronger biological response. In this work a femtosecond laser system was used to process β-TCP pellet surfaces. The femtosecond laser processing results in surface morphology modification, turning the flat mirror-polished surface into a rough and opaque one. The morphological and physicochemical characteristics of the material surface were studied by means of SEM, AFM, Raman, XRD and contact angle measurements. The processed surface showed the formation of micro- and nano-roughness; furthermore, a partial phase transformation from β-TCP to α-TCP was detected. A significant improvement in surface wettability for three different liquids (i.e. water, ethylene glycol and diiodomethane) is reported. This implies an increase in surface free energy as well. The combination of α and β phases, together with the increased roughness obtained by laser processing, could positively affect cell adhesion and metabolic activity.
TL;DR: RooFit as mentioned in this paper is a toolkit for statistical modeling and fitting, and together with RooStats it is used for measurements and statistical tests by most experiments in particle physics, particularly the LHC experiments.
Abstract: RooFit is a toolkit for statistical modeling and fitting; together with RooStats it is used for measurements and statistical tests by most experiments in particle physics, particularly the LHC experiments. As the LHC program progresses, physics analysis becomes more computationally demanding. RooFit development in recent years has therefore focused on modernizing the toolkit, improving its ease of use, and optimizing performance. This paper presents the new RooFit vectorized computation mode, which supports calculations on the GPU. Additionally, we discuss new features in the upcoming ROOT 6.26 release, highlighting the new pythonizations in particular.
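Not RooFit itself (which requires ROOT, not assumed here): a minimal stdlib-Python sketch of the core operation RooFit performs and vectorizes, unbinned maximum-likelihood fitting, shown for the one case with a closed-form solution, a Gaussian model:

```python
import math
import random

def gaussian_nll(data, mu, sigma):
    """Negative log-likelihood of i.i.d. Gaussian data."""
    return sum(0.5 * ((x - mu) / sigma) ** 2
               + math.log(sigma * math.sqrt(2 * math.pi)) for x in data)

def fit_gaussian(data):
    """Analytic unbinned MLE: sample mean and (biased) sample std."""
    n = len(data)
    mu = sum(data) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / n)
    return mu, sigma

random.seed(0)
data = [random.gauss(5.0, 2.0) for _ in range(10_000)]
mu, sigma = fit_gaussian(data)
```

RooFit generalizes this to arbitrary composite models by minimizing the same kind of NLL numerically; the vectorized/GPU mode referenced above accelerates exactly that per-event likelihood evaluation.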
TL;DR: In this article, the authors propose a noise rate estimation method and prove that, by adopting importance reweighting, the accuracy of classification under label noise can increase by approximately 10% with any surrogate loss function.
Abstract: In a dataset, misidentified labels can be modeled as the true labels flipped with some probability. In this paper, we study the general situation in which sample labels are corrupted at random. We propose a noise rate estimation method and prove that, by adopting importance reweighting, the accuracy of classification under label noise can increase by approximately 10% with any surrogate loss function. The two classification methods we choose for the robustness analysis are a convolutional neural network and a convolutional neural network with importance reweighting; both methods are fully described in this paper. We discuss label noise problems and solutions in the introduction and explain how the importance reweighting method and the noise rate estimation method are combined to address the problem. Experiments on Fashion-MNIST0.5, Fashion-MNIST0.6, and CIFAR with noise verify our approach. Finally, we also provide the transition matrix of flip rates for each dataset.
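One common form of importance reweighting for binary, class-conditional label noise with a symmetric flip rate (a sketch of the general technique, not necessarily this paper's exact estimator) weights each sample by the ratio of the clean to the noisy posterior:

```python
def importance_weight(p_noisy, rho):
    """Importance weight beta(x, y~) = P_clean(y~|x) / P_noisy(y~|x)
    for binary label noise with symmetric flip rate rho, using
        P_clean(y~|x) = (P_noisy(y~|x) - rho) / (1 - 2*rho).
    p_noisy : a model's estimate of P(y~|x) under the noisy distribution."""
    assert 0 <= rho < 0.5, "flip rate must be below 1/2"
    p_clean = (p_noisy - rho) / (1 - 2 * rho)
    return max(p_clean, 0.0) / p_noisy

# Samples whose observed label looks consistent get weight > 1;
# samples the model thinks were likely flipped are down-weighted toward 0.
w_hi = importance_weight(0.9, 0.2)   # likely clean
w_lo = importance_weight(0.25, 0.2)  # likely flipped
```

Multiplying each sample's surrogate loss by this weight makes the expected risk under the noisy distribution match the clean-data risk, which is why the method works with any surrogate loss; estimating `rho` well is what the paper's noise rate estimation method is for.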
TL;DR: In this paper , the authors investigate the viability of retardation theory, an alternative to the Dark Matter paradigm (DM), which does not seek to modify the General Principal of Relativity but to improve solutions within it by exploring its weak field approximation to solve the missing mass problem in a galactic context.
Abstract: The missing mass problem has been with us since the 1970s, as Newtonian gravity using baryonic mass alone cannot account for various observations. We investigate the viability of retardation theory, an alternative to the Dark Matter (DM) paradigm which does not seek to modify the General Principle of Relativity but to improve solutions within it, by exploring its weak-field approximation to solve this problem in a galactic context. This work presents eleven rotation curves calculated using Retardation Theory. The calculated rotation curves are compared with observed rotation curves and with those calculated using MOND. Values for the ratio of the change in mass flux to mass are extracted from the fitting process as a free parameter and are interpreted here, as in previous works, in terms of known galactic processes. Retardation Theory successfully reproduces the rotation curves, and a preliminary correlation with the star birthrate index suggests a possible link between galactic winds and observed rotation curves: galactic mass outflows carried by galactic winds may affect rotation curves. Retardation Theory shows promising results within current observations, but more research is needed to elucidate the suggested mechanism and the processes that contribute to it.
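The mismatch motivating DM and alternatives such as retardation theory can be shown numerically: a Keplerian curve for a central baryonic mass falls off as r^(-1/2), whereas observed galactic rotation curves stay roughly flat at large radii. The galaxy below is a toy point-mass model for illustration, not fitted data from the paper:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
KPC = 3.086e19     # kiloparsec, m

def keplerian_v(r_kpc, m_solar):
    """Circular speed (km/s) around a point mass: v = sqrt(G M / r)."""
    return math.sqrt(G * m_solar * M_SUN / (r_kpc * KPC)) / 1e3

# Toy galaxy: 1e11 solar masses of baryons treated as a central point mass.
radii = [2, 5, 10, 20, 30]  # kpc
v_newton = [keplerian_v(r, 1e11) for r in radii]
```

The predicted speeds drop steadily with radius, while measured curves for many spirals remain near-constant out to tens of kpc; closing that gap is exactly what the retardation correction (or DM halos, or MOND) must supply.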
TL;DR: The spin-charge-family theory uses Clifford odd and Clifford even objects to describe the internal spaces of fermion and boson fields, as mentioned in this paper. This yields a new understanding of the second-quantization postulates for fermion and boson fields: the "basis vectors" determined by Clifford odd objects demonstrate all the properties of the internal space of fermions and transfer their anticommutativity to their creation and annihilation operators.
Abstract: In a long series of works the author has demonstrated that the model named the spin-charge-family theory offers an explanation for all the properties of the fermion and boson fields assumed in the standard model, as well as for many of their so far observed properties, provided that the dimension of space-time is ≥ (13 + 1) and fermions interact with gravity only. In this talk, I briefly report on the achievements of the theory so far. The main contribution demonstrates the use of Clifford odd and even objects for the description of the internal spaces of fermion (Clifford odd) and boson (Clifford even) fields, which opens up a new understanding of the second-quantization postulates for fermion and boson fields: the “basis vectors” determined by the Clifford odd objects demonstrate all the properties of the internal space of fermions and transfer their anticommutativity to their creation and annihilation operators, while the “basis vectors” determined by the Clifford even objects demonstrate all the properties of the internal space of boson fields and transfer their commutativity to their creation and annihilation operators. A toy model with d = (5 + 1) illustrates these statements.
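For orientation, the odd/even split rests on a standard Clifford-algebra fact (not specific to this theory). The generators obey

```latex
\{\gamma^a, \gamma^b\}_{+} \;=\; \gamma^a\gamma^b + \gamma^b\gamma^a \;=\; 2\,\eta^{ab},
```

and a Clifford odd (even) element is a superposition of products of an odd (even) number of $\gamma^a$'s. The grading multiplies as odd × odd = even, even × even = even, odd × even = odd; it is this grading that allows the odd "basis vectors" to carry fermion-like anticommutation, and the even ones boson-like commutation, to the corresponding creation and annihilation operators.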