
Showing papers by "Philips published in 2022"


Journal ArticleDOI
TL;DR: In this article, a low-rank motion-corrected (LRMC) reconstruction for non-rigid motion-corrected MR fingerprinting (MRF) is proposed and demonstrated for cardiac motion correction in 2D myocardial MRF and for respiratory motion correction in 3D myocardial and 3D liver MRF.
Abstract: PURPOSE Develop a novel low-rank motion-corrected (LRMC) reconstruction for nonrigid motion-corrected MR fingerprinting (MRF). METHODS Generalized motion-corrected (MC) reconstructions have been developed for steady-state imaging. Here we extend this framework to enable nonrigid MC for transient imaging applications with varying contrast, such as MRF. This is achieved by integrating low-rank dictionary-based compression into the generalized MC model to reconstruct MC singular images, reducing motion artifacts in the resulting parametric maps. The proposed LRMC reconstruction was applied for cardiac motion correction in 2D myocardial MRF (T1 and T2) with extended cardiac acquisition window (~450 ms) and for respiratory MC in free-breathing 3D myocardial and 3D liver MRF. Experiments were performed in phantom and 22 healthy subjects. The proposed approach was compared with reference spin echo (phantom) and with 2D electrocardiogram-triggered/breath-hold MOLLI and T2 gradient-and-spin-echo conventional maps (in vivo 2D and 3D myocardial MRF). RESULTS Phantom results were in general agreement with reference spin-echo measurements, presenting relative errors of approximately 5.4% and 5.5% for T1 and short T2 (<100 ms), respectively. The proposed LRMC MRF reduced residual blurring artifacts with respect to no MC for cardiac or respiratory motion in all cases (2D and 3D myocardial, 3D abdominal). In 2D myocardial MRF, left-ventricle T1 values were 1150 ± 41 ms for LRMC MRF and 1010 ± 56 ms for MOLLI; T2 values were 43.8 ± 2.3 ms for LRMC MRF and 49.5 ± 4.5 ms for T2 gradient-and-spin echo. Corresponding measurements for 3D myocardial MRF were 1085 ± 30 ms and 1062 ± 29 ms for T1, and 43.5 ± 1.9 ms and 51.7 ± 1.7 ms for T2. For 3D liver, LRMC MRF measured liver T1 at 565 ± 44 ms and liver T2 at 35.4 ± 2.4 ms. CONCLUSION The proposed LRMC reconstruction enabled generalized (nonrigid) MC for 2D and 3D MRF, both for cardiac and respiratory motion.
The proposed approach reduced motion artifacts in the MRF maps with respect to no motion compensation and achieved good agreement with reference measurements.
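The low-rank compression step described above, reconstructing a few "singular images" instead of the full time series, can be sketched with a truncated SVD of the fingerprint dictionary. The dictionary below is a toy stand-in of mono-exponential decays rather than Bloch-simulated fingerprints, and all sizes are illustrative assumptions:

```python
import numpy as np

# Toy MRF dictionary: each row is a simulated signal evolution for one
# parameter combination. Real dictionaries come from Bloch simulations;
# these mono-exponential decays are a stand-in for illustration.
rng = np.random.default_rng(0)
n_atoms, n_timepoints, rank = 500, 1000, 5
t = np.linspace(0, 1, n_timepoints)
rates = rng.uniform(1, 20, size=n_atoms)
dictionary = np.exp(-np.outer(rates, t))        # (n_atoms, n_timepoints)

# Truncated SVD of the dictionary gives a low-rank temporal basis Ur;
# reconstructing "singular images" means solving for the rank coefficients
# instead of the full time series.
_, _, vt = np.linalg.svd(dictionary, full_matrices=False)
ur = vt[:rank].T                                 # (n_timepoints, rank)

# Compress one measured time course into rank coefficients and expand back.
signal = np.exp(-5.0 * t)
coeffs = ur.T @ signal                           # rank numbers instead of 1000
approx = ur @ coeffs
rel_err = np.linalg.norm(signal - approx) / np.linalg.norm(signal)
print(rank, rel_err)
```

Solving for `rank` coefficient images instead of a thousand time-point images is what makes a motion-corrected inversion of this kind tractable.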

15 citations


Book ChapterDOI
01 Jan 2022
TL;DR: In this paper, the electrical conductivity of polymer/graphene composites prepared by different techniques is investigated; the electrical percolation thresholds and ultimate conductivity values of these graphene-based polymer composites are reported and discussed in detail.
Abstract: In this chapter, the focus is on the electrical conductivity of polymer/graphene composites prepared by different techniques. Initially, we discuss the electrical conductivity of different types of graphene, such as pristine graphene, graphene oxide (GO), reduced GO, chemically vapor-deposited graphene, and liquid-exfoliated graphene. Thermally exfoliated pristine graphene showed the highest electrical conductivity, whereas highly functionalized GO exhibited the lowest. The electrical percolation thresholds and ultimate conductivity values of these graphene-based polymer composites are reported and discussed in detail. Several factors on which electrical conductivity depends, such as the types of polymer and graphene, graphene loading, processing technique, and graphene alignment, are presented here. Finally, possible applications of these conductive polymer/graphene composites, such as sensors and electromagnetic interference shielding, are mentioned.
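Percolation thresholds and critical exponents like those reported in the chapter are commonly extracted by fitting the classical power law σ = σ0(φ − φc)^t to conductivity-versus-loading data. A minimal sketch on synthetic data (the φc and t values are assumed for illustration, not taken from the chapter):

```python
import numpy as np

# Percolation power law: sigma = sigma0 * (phi - phi_c)**t above the threshold.
# Synthetic conductivity-vs-loading data generated from assumed values
# (phi_c = 0.5 vol%, t = 2), standing in for measured composite data.
phi_c_true, t_true, sigma0 = 0.005, 2.0, 1.0e4
phi = np.linspace(0.006, 0.05, 20)               # filler volume fractions
sigma = sigma0 * (phi - phi_c_true) ** t_true    # conductivity (illustrative)

# Grid-search the threshold; for each candidate, the law is linear in
# log-log coordinates, so an ordinary least-squares line fits sigma0 and t.
best = None
for phi_c in np.linspace(0.001, 0.0059, 50):
    x = np.log(phi - phi_c)
    y = np.log(sigma)
    slope, intercept = np.polyfit(x, y, 1)
    resid = np.sum((y - (slope * x + intercept)) ** 2)
    if best is None or resid < best[0]:
        best = (resid, phi_c, slope)

_, phi_c_fit, t_fit = best
print(phi_c_fit, t_fit)
```

The fitted exponent t is often compared against the universal 2D/3D percolation values when characterizing how the conductive graphene network forms.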

9 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed a preconditioned water-fat total field inversion (wfTFI) algorithm that directly estimates the susceptibility map from complex multi-echo gradient echo data for water-fat regions.
Abstract: Purpose To (a) develop a preconditioned water-fat total field inversion (wfTFI) algorithm that directly estimates the susceptibility map from complex multi-echo gradient echo data for water-fat regions and to (b) evaluate the performance of the proposed wfTFI quantitative susceptibility mapping (QSM) method in comparison with a local field inversion (LFI) method and a linear total field inversion (TFI) method in the spine. Methods Numerical simulations and in vivo spine multi-echo gradient echo measurements were performed to compare wfTFI to an algorithm based on disjoint background field removal (BFR) and LFI and to a formerly proposed TFI algorithm. The data from 1 healthy volunteer and 10 patients with metastatic bone disease were included in the analysis. Clinical routine computed tomography (CT) images were used as a reference standard to distinguish osteoblastic from osteolytic changes. The ability of the QSM methods to distinguish osteoblastic from osteolytic changes was evaluated. Results The proposed wfTFI method was able to decrease the normalized root mean square error compared to the LFI and TFI methods in the simulation. The in vivo wfTFI susceptibility maps showed reduced BFR artifacts, noise amplification, and streaking artifacts compared to the LFI and TFI maps. wfTFI provided a significantly higher diagnostic confidence in differentiating osteolytic and osteoblastic lesions in the spine compared to the LFI method (p = .012). Conclusion The proposed wfTFI method can minimize BFR artifacts, noise amplification, and streaking artifacts in water-fat regions and can thus better differentiate between osteoblastic and osteolytic changes in patients with metastatic disease compared to LFI and the original TFI method.
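All three inversion methods compared here (LFI, TFI, wfTFI) invert some form of the same forward model: the measured field is the susceptibility map filtered by the unit dipole in k-space. A minimal sketch of that forward model on a toy grid (background for the comparison, not the paper's algorithm):

```python
import numpy as np

# The dipole convolution underlying QSM: in k-space the field perturbation is
# the susceptibility distribution multiplied by D(k) = 1/3 - kz^2/|k|^2
# (B0 along z). Toy 32^3 grid and a uniform sphere, purely illustrative.
n = 32
k = np.fft.fftfreq(n)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                     # dodge 0/0 at the k-space origin
dipole = 1.0 / 3.0 - kz**2 / k2
dipole[0, 0, 0] = 0.0                 # DC term is conventionally set to zero

# Forward model: susceptibility sphere -> field map. QSM methods invert some
# regularized variant of this relationship.
chi = np.zeros((n, n, n))
xx, yy, zz = np.meshgrid(*(np.arange(n) - n // 2,) * 3, indexing="ij")
chi[xx**2 + yy**2 + zz**2 < 5**2] = 1.0

field = np.real(np.fft.ifftn(dipole * np.fft.fftn(chi)))
print(field.shape)
```

Because D(k) vanishes on a cone, the inversion is ill-posed, which is why regularization choices (and, here, the handling of the water-fat signal model and background field) separate the methods being compared.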

7 citations


Journal ArticleDOI
TL;DR: In this paper, a cyclic steady-state approach was proposed to characterize magnetization transfer (MT) and inhomogeneous MT contrasts from a single acquisition, producing both semiquantitative contrast ratios and quantitative parameter maps.
Abstract: PURPOSE Magnetization transfer (MT) and inhomogeneous MT (ihMT) contrasts are used in MRI to provide information about macromolecular tissue content. In particular, MT is sensitive to macromolecules, and ihMT appears to be specific to myelinated tissue. This study proposes a technique to characterize MT and ihMT properties from a single acquisition, producing both semiquantitative contrast ratios and quantitative parameter maps. THEORY AND METHODS Building on previous work that uses multiband RF pulses to efficiently generate ihMT contrast, we propose a cyclic steady-state approach that cycles between multiband and single-band pulses to boost the achieved contrast. Resultant time-variable signals are reminiscent of an MR fingerprinting acquisition, except that the signal fluctuations are entirely mediated by MT effects. A dictionary-based low-rank inversion method is used to reconstruct the resulting images and to produce both semiquantitative MT ratio and ihMT ratio maps, as well as quantitative parameter estimates corresponding to an ihMT tissue model. RESULTS Phantom and in vivo brain data acquired at 1.5 Tesla demonstrate the expected contrast trends, with ihMT ratio maps showing contrast more specific to white matter, as has been reported by others. Quantitative estimation of semisolid fraction and dipolar T1 was also possible and yielded measurements consistent with literature values in the brain. CONCLUSION By cycling between multiband and single-band pulses, an entirely MT-mediated fingerprinting method was demonstrated. This proof-of-concept approach can be used to generate semiquantitative maps and quantitatively estimate some macromolecular-specific tissue parameters.

7 citations


Journal ArticleDOI
TL;DR: In this paper, the authors showed that increased light intensity applied shortly before harvest (end of production, EOP) increases nutritional value, i.e., carbohydrates and antioxidants, and could improve chilling tolerance.

6 citations


Book ChapterDOI
01 Jan 2022
TL;DR: A comprehensive review of the literature on Holothuria scabra can be found in this article, where the authors present the most complete synthesis to date, including scientific papers and material published by local institutions and/or in foreign languages.
Abstract: Holothuria scabra is one of the most intensively studied holothuroids, or sea cucumbers (Echinodermata: Holothuroidea), having been discussed in the literature since the early 19th century. The species is important for several reasons: (1) it is widely distributed and historically abundant in several shallow soft-bottom habitats throughout the Indo-Pacific, (2) it has a high commercial value on the Asian markets, where it is mainly sold as a dried product (beche-de-mer) and (3) it is the only tropical holothuroid species that can currently be mass-produced in hatcheries. Over 20 years have elapsed since the last comprehensive review on H. scabra published in 2001. Research on H. scabra has continued to accumulate, fuelled by intense commercial exploitation, and further declines in wild stocks over the entire distribution range. This review compiles data from over 950 publications pertaining to the biology, ecology, physiology, biochemical composition, aquaculture, fishery, processing and trade of H. scabra, presenting the most complete synthesis to date, including scientific papers and material published by local institutions and/or in foreign languages. The main goal of this project was to summarize and critically discuss the abundant literature on this species, making it more readily accessible to all stakeholders aiming to conduct fundamental and applied research on H. scabra, or wishing to develop aquaculture, stock enhancement and management programs across its geographic range.

4 citations


Journal ArticleDOI
TL;DR: In this article, the authors compare five streaming machine learning algorithms applied to visual defect inspection with real-world data provided by Philips Consumer Lifestyle BV. They show that active learning reduces the data labeling effort by almost 15% on average in the worst case while keeping an acceptable classification performance.
Abstract: Quality control is a crucial activity performed by manufacturing companies to verify product conformance to the requirements and specifications. Standardized quality control ensures that all the products are evaluated under the same criteria. The decreased cost of sensors and connectivity enabled an increasing digitalization of manufacturing and provided greater data availability. Such data availability has spurred the development of artificial intelligence models, which allow higher degrees of automation and reduced bias when inspecting the products. Furthermore, the increased inspection speed reduces overall costs and time required for defect inspection. In this research, we compare five streaming machine learning algorithms applied to visual defect inspection with real-world data provided by Philips Consumer Lifestyle BV. Furthermore, we compare them in a streaming active learning context, which reduces the data labeling effort in a real-world context. Our results show that active learning reduces the data labeling effort by almost 15% on average in the worst case while keeping an acceptable classification performance. The use of machine learning models for automated visual inspection is expected to speed up quality inspection by up to 40%.
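The streaming active-learning idea, requesting a costly label only when the model is uncertain, can be sketched with a small online logistic model. The data, query threshold, and learning rate below are made-up stand-ins, since the Philips inspection dataset is not public:

```python
import numpy as np

# Synthetic stream standing in for the (non-public) inspection data: a simple
# linear "defect" rule, a tiny online logistic model, and an uncertainty rule
# that asks for a label only when the prediction is near 0.5.
rng = np.random.default_rng(42)
n = 2000
X = rng.normal(size=(n, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)       # hypothetical defect rule

w = np.zeros(3)                                  # [w0, w1, bias]
lr = 0.5                                         # learning rate (assumed)

def proba(x):
    """P(defect | x) under the current online logistic model."""
    return 1.0 / (1.0 + np.exp(-(w[0] * x[0] + w[1] * x[1] + w[2])))

labeled = 0
for i in range(n):                               # samples arrive one at a time
    p = proba(X[i])
    if abs(p - 0.5) < 0.2:                       # uncertain: query the label
        g = p - y[i]                             # logistic-loss gradient factor
        w -= lr * np.array([g * X[i, 0], g * X[i, 1], g])
        labeled += 1                             # labeling effort actually spent

preds = ((X @ w[:2] + w[2]) > 0).astype(float)
acc = (preds == y).mean()
print(labeled, round(acc, 3))
```

Only a fraction of the stream ends up labeled, which is the effort reduction the paper quantifies on real inspection data.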

4 citations


Book ChapterDOI
01 Jan 2022
TL;DR: Graphene has been considered as a promising candidate for a wide range of industrial applications like structural and electronic components, batteries/capacitor, adsorbents, catalyst support, thermal transport media, and even application in biotechnology.
Abstract: Recently, academic as well as industrial attention has turned to the structural and electronic properties of carbon-based nanomaterials. Among carbon nanomaterials, graphene is now the hottest topic in condensed-matter physics and materials science. Graphene, a two-dimensional carbon nanomaterial, was discovered in 2004 by A.K. Geim and K.S. Novoselov. Because of its excellent electronic, physical, and thermal properties, non-toxicity, and high chemical and thermal tolerance, graphene has been considered a promising candidate for a wide range of industrial applications such as structural and electronic components, batteries/capacitors, adsorbents, catalyst supports, thermal transport media, and even applications in biotechnology. It is also a suitable material for energy technologies such as fuel cells, solar cells, hydrogen storage, batteries and capacitors, field-effect transistors, and transparent electrodes. Several approaches have been applied to synthesize graphene, and some have also been utilized to produce graphene at commercial scale. Despite strong interest and rapid progress in research on graphene and graphene-related materials, there is still a long way to go before the widespread implementation of graphene. The primary difficulty is the imbalance between quantity and quality: large-scale production while maintaining high quality and high yield, along with controllable tuning of the bandgap of graphene, remains an open issue. This chapter deals with the different methods used for graphene synthesis, the advantages and disadvantages of different graphene synthesis techniques, the market scenario for commercial graphene production, and the methods used for commercial production. This chapter also covers different surface functionalization processes of graphene for different applications.

4 citations


Journal ArticleDOI
TL;DR: In this paper, the authors explored the value of amide proton transfer-weighted (APTw) magnetic resonance imaging (MRI) for differential diagnosis of fibroadenomas and malignant breast tumors.

4 citations


Book ChapterDOI
01 Jan 2022
TL;DR: This chapter revisits two measurement modalities, PPG-based and motion-based, and two widely used motion-based core algorithms, optical flow and profile correlation; their performance is evaluated in the context of magnetic resonance applications, e.g., as input signals for respiratory triggering/gating.
Abstract: Camera-based measurement enables contactless respiration monitoring by extracting subtle changes in the sequence of images of a human body. Over the last decade, various motion-based approaches have been proposed to measure the respiratory motion in the chest/abdomen, and some of them have been incorporated into products for video health monitoring. It has also been shown that respiratory effort leaves its marks in the Photoplethysmography (PPG) signal because it leads to a modulation in blood volume changes in the living skin (e.g., in the face). However, there is no thorough benchmark between motion-based and PPG-based approaches nor are there insights into their merits and limitations as needed in system designs for specific applications. In addition, though various motion-based solutions were investigated and implemented, no adequate benchmarking of the core algorithms is available, where the core is defined as the estimation of tiny motion of specific body parts. In this chapter, we first revisit two measurement modalities: PPG-based and motion-based, and then two widely used motion-based core algorithms: optical flow and profile correlation. Their performances are evaluated in the context of magnetic resonance applications, e.g., as input signals for respiratory triggering/gating. The promises and limitations of various modalities and algorithmic approaches are discussed. The insights gained in this chapter are intended to fuel further benchmarking activities for these methods in other application areas, thereby contributing to increased and effective usage of camera-based respiration measurement.
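Of the two motion-based core algorithms benchmarked here, profile correlation is straightforward to sketch: collapse each frame to a 1-D intensity profile and search for the shift that maximizes correlation with a reference profile. The synthetic "chest" profile and breathing amplitude below are illustrative assumptions:

```python
import numpy as np

def profile_shift(ref, cur, max_shift=10):
    # Integer shift of `cur` relative to `ref` that maximizes the
    # normalized correlation, searched over [-max_shift, max_shift].
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(cur, -s)
        score = np.dot(ref, shifted) / (np.linalg.norm(ref) * np.linalg.norm(shifted))
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

# Synthetic 1-D profiles: a bright band moving sinusoidally (toy breathing).
x = np.arange(200)
base = np.exp(-((x - 100) ** 2) / (2 * 15.0**2))
true_shifts = np.round(6 * np.sin(2 * np.pi * np.arange(40) / 40)).astype(int)
frames = [np.roll(base, s) for s in true_shifts]

# Track each frame against the first; the shift trace is the respiration signal.
estimated = [profile_shift(frames[0], f) for f in frames]
print(estimated[:10])
```

Real implementations add subpixel interpolation of the correlation peak and region-of-interest selection, but the shift-versus-time trace above is already the kind of signal usable for respiratory triggering/gating.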

3 citations


Journal ArticleDOI
TL;DR: In this article, the anti-TNF drugs, infliximab and adalimumab, were not detected in fistula samples from any of the Crohn's patients despite detection in 'spiked' positive control samples.
Abstract: Introduction Anti-TNF therapy is recommended as treatment for patients with Crohn's perianal fistulas. However, a significant proportion of patients have a sub-optimal response to anti-TNF therapy. Higher serum levels of anti-TNF agents have been associated with improved outcomes in perianal Crohn's disease. Currently, it is unknown whether anti-TNF agent levels can be detected in tissue from fistula tracts themselves and whether this is associated with response. Aims and methods We undertook a pilot study to measure fistula tissue levels of anti-TNF medication (infliximab and adalimumab). We used a previously validated targeted proteomic technique, employing ultraperformance liquid chromatography-mass spectrometry, to detect/quantify anti-TNF drugs. Biopsies were obtained from fistula tracts of patients with Crohn's disease on maintenance treatment, with idiopathic (cryptoglandular) fistula tissues used as negative controls as well as positive controls (by spiking the latter tissues with anti-TNF drugs). Results Tissue was sampled from the fistula tracts of seven patients with Crohn's perianal disease (five patients were on adalimumab and two patients were on infliximab). The anti-TNF drugs, infliximab and adalimumab, were not detected in fistula samples from any of the Crohn's patients despite detection in 'spiked' positive control samples. Conclusion Absence of detection of the anti-TNF drugs in fistula tissue raises the question of the role of tissue penetrance of anti-TNF drugs in response to therapy. Further work is required in a larger number of patients to validate the findings observed and to investigate whether any correlation exists between tissue and serum levels of anti-TNF and clinical outcome. Summary Predicting response in Crohn's fistula patients on biologic therapy is difficult with no reliable biomarkers.
This pilot study uses targeted proteomics to investigate the potential role of tissue drug levels in acting as a biomarker of treatment response.

Journal ArticleDOI
TL;DR: In this paper, the effect of a combination of compressed sensing and SENSitivity Encoding (SENSE) acceleration techniques on radiation therapy magnetic resonance imaging (MRI) simulation workflows was evaluated.
Abstract: Purpose To assess the effect of a combination of compressed sensing and SENSitivity Encoding (SENSE) acceleration techniques on radiation therapy magnetic resonance imaging (MRI) simulation workflows. Methods and Materials Thirty-seven acquisitions were performed with both SENSE-only (SENSE) and combined compressed sensing and SENSE (CS) techniques in 24 patients receiving radiation therapy MRI simulation for a wide range of disease sites. The anatomic field of view prescription and image resolution were identical for both SENSE and CS acquisitions to ensure fair comparison. The acquisition time of all images was recorded to assess time savings. For each image pair, image quality, and ability to contour were assessed by 2 radiation oncologists. Aside from direct image pair comparisons, the feasibility of using CS to improve MRI simulation protocols by increasing image resolution, field of view, and reducing motion artifacts was also evaluated. Results CS resulted in an average reduction of 27% in scan time with negligible changes in image quality and the ability to contour structures for RT treatment planning compared with SENSE. Physician scoring of image quality and ability to contour shows that while SENSE still has slightly better image quality compared with CS, this observed difference in image quality did not affect the ability to contour. In addition, the higher acceleration capability of CS enabled use of superior-inferior direction phase encoding in a sagittal 3-dimensional T2-weighted scan for substantially improved visibility of the prostatic urethra, which eliminated the need for a Foley catheter in most patients. Conclusions The combination of compressed sensing and parallel imaging resulted in marked improvements in the MRI Simulation workflow. The scan time was reduced without significantly affecting image quality in the context of ability to contour. 
The acceleration capabilities allowed for increased image resolution under similar scanning times as well as significantly improved urethra visualization in prostate simulations.
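The compressed-sensing half of the combined acceleration rests on recovering a sparse (or transform-sparse) image from undersampled k-space, classically by iterating a data-consistency step with soft-thresholding (ISTA). Below is a deliberately minimal 1-D sketch with a signal sparse in the image domain itself; the clinical product's reconstruction is of course far more elaborate:

```python
import numpy as np

# ISTA sketch of compressed sensing: recover a sparse signal from randomly
# undersampled Fourier samples by alternating a gradient (data-consistency)
# step with soft-thresholding. Sizes and sparsity are illustrative.
rng = np.random.default_rng(1)
n, m = 256, 96
signal = np.zeros(n)
signal[rng.choice(n, 8, replace=False)] = rng.uniform(1, 2, 8)  # sparse truth

mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, m, replace=False)] = True
y = np.fft.fft(signal, norm="ortho")[mask]       # undersampled "k-space"

def grad(x):
    # Gradient of 0.5 * ||F_u x - y||^2 with the unitary FFT.
    resid = np.fft.fft(x, norm="ortho")[mask] - y
    full = np.zeros(n, dtype=complex)
    full[mask] = resid
    return np.fft.ifft(full, norm="ortho")

x = np.zeros(n, dtype=complex)
lam = 0.02                                       # sparsity weight (assumed)
for _ in range(400):
    x = x - grad(x)                              # data-consistency step
    x = np.sign(x.real) * np.maximum(np.abs(x.real) - lam, 0.0)  # soft-threshold

rel_err = np.linalg.norm(x - signal) / np.linalg.norm(signal)
print(round(rel_err, 4))
```

In MRI the sparsifying transform is typically a wavelet and the forward operator includes the coil sensitivities (the SENSE part), but the alternation between data consistency and thresholding is the same.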

Book ChapterDOI
01 Jan 2022
TL;DR: The goal of this chapter is to facilitate research to draw conclusions on the feasibility of camera-based blood pressure monitoring, including video processing methods for acquiring arterial waveforms and signal processing methods drawn from contact measurement studies to convert these waveforms into blood pressure.
Abstract: Contactless monitoring of blood pressure with video cameras could improve hypertension awareness and control. The measurement principles include skin reflectance-mode photo-plethysmography and head ballistocardiography to acquire arterial waveforms and pulse transit time and waveform analysis to derive blood pressure. Studies of this approach are increasingly appearing in the literature. The goal of this chapter is to facilitate research to draw conclusions on the feasibility of camera-based blood pressure monitoring. First, the advantages of this approach over current or potential contact-based methods for measuring blood pressure are argued. Then, the theory is explained, including video processing methods for acquiring arterial waveforms and signal processing methods drawn from contact measurement studies to convert these waveforms into blood pressure. Next, the key experimental studies to date on the approach, as well as relevant contact sensor investigations, are summarized. Finally, recommendations for future research and an outlook on the approach are provided.
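Studies in this area typically derive blood pressure from pulse transit time through a per-subject calibration; one commonly used form in the PTT literature is an inverse relation BP ≈ a/PTT + b fitted to cuff readings. The calibration pairs below are hypothetical numbers for illustration only:

```python
import numpy as np

# Hypothetical per-subject calibration pairs: (PTT in ms, cuff systolic BP in
# mmHg). In practice these come from simultaneous camera and cuff readings.
ptt = np.array([220.0, 240.0, 260.0, 280.0, 300.0])
sbp = np.array([135.0, 128.0, 122.0, 117.0, 113.0])

# Fit BP = a * (1/PTT) + b with ordinary least squares.
A = np.column_stack([1.0 / ptt, np.ones_like(ptt)])
(a, b), *_ = np.linalg.lstsq(A, sbp, rcond=None)

def bp_from_ptt(ptt_ms):
    """Estimate systolic BP (mmHg) from a measured pulse transit time (ms)."""
    return a / ptt_ms + b

print(round(bp_from_ptt(250.0), 1))
```

The chapter's caveat applies directly here: the calibration drifts over time and with vascular tone, which is one of the main open problems for cuffless approaches.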

Journal ArticleDOI
TL;DR: In this paper, the authors used the information obtained from perfusion imaging to not only visualize vascular signal but also functional activation of resting state networks without acquiring any additional data besides the already available information.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the optimal labeling position and gradient moment for 4D-MR angiography based on superselective pseudo-continuous arterial spin labeling combined with CENTRA-keyhole and view-sharing for vessel-selective flow visualization of the internal carotid artery (ICA) and vertebrobasilar artery (VBA) systems.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated intranetwork and internetwork connectivity differences between patients with major depressive disorder (MDD) and healthy controls at the integrity, network, and edge levels of 8 well-defined resting-state networks.

Journal ArticleDOI
TL;DR: In this article, a simulation was performed to depict magnetic susceptibility structures of various geometries on susceptibility-weighted imaging (SWI) phase images and quantitative susceptibility mapping (QSM), and the results showed that the appearance of a sphere on the SWI phase image ranged from a centric nodule to mixed positive and negative values as the diameter increased.

Journal ArticleDOI
TL;DR: In this article, the DANTE-VF-RARE sequence was used for vessel wall imaging in ApoE-deficient (ApoE-/-) mice at 7 Tesla.


Book ChapterDOI
01 Jan 2022
TL;DR: This chapter focuses on the latest model-based method that was tested in near-infrared (NIR), and demonstrates that this latest approach has the character of a versatile tool in camera-PPG.
Abstract: Cameras and dedicated processing enable contactless monitoring of blood volume pulsation by detecting the pulse-induced subtle color variations from human skin surface. This technique, called Camera Photoplethysmography (camera-PPG), has been implemented in the visible and near infrared (NIR) ranges and prototyped for long-term, contactless, and continuous pulse-rate monitoring (24/7), for applications in various areas including clinical settings, assisted living, baby and elderly care, fitness, and automotive settings. In this chapter, we will review the development of the model-based camera-PPG technology. Furthermore, we focus on the latest model-based method that was tested in near-infrared (NIR). By applying the same technology to visible light in the challenging conditions of fitness, we demonstrate that this latest approach has the character of a versatile tool in camera-PPG.
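The model-based camera-PPG methods reviewed here share one core trick: temporally normalize the RGB traces, then project onto chrominance axes on which intensity variations cancel while the pulse-induced color variation survives (as in the plane-orthogonal-to-skin family of methods). Below is a sketch on synthetic traces; the channel weights, amplitudes, and rates are assumptions for illustration, not parameters from the chapter:

```python
import numpy as np

fs, n = 30, 300                                  # 30 fps, 10 s of "video"
t = np.arange(n) / fs
pulse = 0.002 * np.sin(2 * np.pi * 1.2 * t)      # 72 bpm, tiny color variation
motion = 0.05 * np.sin(2 * np.pi * 0.3 * t)      # much larger intensity change

# Mean skin RGB, each channel carrying the same intensity disturbance but a
# different (assumed) pulsatile strength: strongest in green.
rgb = np.stack([
    0.77 * (1 + motion + 0.3 * pulse),           # R
    0.51 * (1 + motion + 0.8 * pulse),           # G
    0.38 * (1 + motion + 0.5 * pulse),           # B
])

cn = rgb / rgb.mean(axis=1, keepdims=True)       # temporal normalization
s1 = cn[1] - cn[2]                               # chrominance axis 1
s2 = cn[1] + cn[2] - 2 * cn[0]                   # chrominance axis 2
h = s1 + (s1.std() / s2.std()) * s2              # alpha-tuned combination

def dominant_freq(x):
    spec = np.abs(np.fft.rfft(x - x.mean()))
    return np.fft.rfftfreq(len(x), 1 / fs)[np.argmax(spec)]

print(dominant_freq(rgb[1]), dominant_freq(h))   # raw green vs pulse signal
```

The raw green trace is dominated by the intensity disturbance, while the combined chrominance signal recovers the pulse frequency, which is the robustness property that makes these methods usable in fitness and NIR conditions.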

DOI
01 Jan 2022
TL;DR: In this article, the feeding habits and dietary sources of three species of bathyal sea anemones (Actinostola callosa, Actinauge cristata, Urticina sp.) from the Northwest Atlantic were studied in a multi-faceted approach including stable isotopes, gastrovascular contents, lipid and fatty acids analysis and observations in a mesocosm.
Abstract: Sea anemones are often presented as a major component of benthic communities and described as ecologically important in benthic food webs. However, studies on the trophic ecology of deep-sea species are rare. Here, the feeding habits and dietary sources of three species of bathyal sea anemones (Actinostola callosa, Actinauge cristata, Urticina sp.) from the Northwest Atlantic were studied in a multi-faceted approach including stable isotopes, gastrovascular contents, lipid and fatty acids analysis and observations in a mesocosm. Stable isotope analysis showed that A. callosa sits at a slightly lower trophic level than A. cristata and Urticina sp. and that the two latter species rely on different carbon sources at roughly the same trophic level. The gastrovascular cavity contents and mesocosm study revealed that all three species ingest a variety of food items, from inorganic materials to a diversity of metazoans, including whole large prey. Total lipid content varied across species and was highest in Urticina sp. Phospholipids constituted the main lipid class in all three species, with consistently high levels of wax ester storage lipids. All sea anemones were also characterized by high proportions of mono- and polyunsaturated fatty acids (MUFA and PUFA) as well as ω3 and ω9 FAs, and strikingly low proportions of 20:4ω6 (ARA). High values of 20:5ω3 (EPA), 20:1ω9 and 22:1ω11(13) evoke a diet centered on zooplankton, with notable particularities. For instance, Urticina sp. had the highest PUFA to saturated FA ratio, indicative of carnivory. Overall, results suggest that the three sea anemone species occupy different niches in the spectrum of opportunistic polyphagous predation/feeding, with Urticina sp. relying chiefly on more energetic and larger prey and A. cristata targeting smaller zooplankton, foraminifera and particulate food, highlighting that large actinians play diverse roles in benthic food webs.

Book ChapterDOI
Sagdullayeva Setora
01 Mar 2022
TL;DR: In this article, two approaches to collect input from participants on their game experience in tests of game applications are presented and discussed: think-aloud protocols and interviewing. It is argued that the basic connection between the two approaches is that in both cases the participants are asked to verbalize their experiences, providing annotations on their interaction with an application or game.
Abstract: In this chapter, two approaches to collect input from participants on their game experience in a test of game applications are presented and discussed: think-aloud protocols and interviewing. It is argued that the basic connection between the two approaches is that in both cases the participants are asked to verbalize their experiences, providing annotations on their interaction with an application or game. Strengths and limitations of both approaches are discussed, and practical guidelines on how to work with both approaches in tests are presented, along with some do's and don'ts. The chapter ends with the conclusion that both approaches can provide very useful and insightful results in tests of game applications, but that it is smart to consider embedding them in an iterative, multi-method test approach.

Book ChapterDOI
Razvan Gabriel Iagar
01 Jan 2022

DOI
01 Jan 2022
TL;DR: The role of technology in ecosystem partnerships is to augment local skills and capacity in health services delivery, and to better connect information and health professionals in a way that enables them to take on new roles to help each other.
Abstract: The role of technology in ecosystem partnerships is to augment local skills and capacity in health services delivery, and to better connect information and health professionals in a way that enables them to take on new roles to help each other. Technology over the past three decades has resulted in dramatic acceleration in the progression from open surgery to minimally invasive, image-guided therapy (IGT). Today, imaging technologies like X-ray and ultrasound allow real-time, in-body visualization of instruments and anatomy without the need for surgical incisions. These advancements coupled with the ongoing miniaturization of endovascular and percutaneous devices have allowed interventional physicians to perform procedures often without general anesthesia and via incisions no larger than a few millimeters. This results in patient recoveries measured in hours to days, rather than weeks as seen with traditional open surgical repairs. Reduced costs, faster recoveries and shortened hospitalization stays are powerful enablers towards capacity building, empowering hospitals to treat more patients per day. Still, new and different skills will be required for doctors, nurses and technologists to perform and support image-guided interventions and minimally invasive surgery and to operate the sophisticated imaging systems, medical devices, and implants that are the backbone of these procedures. This chapter will describe how technologies like extended reality and robotics can support capacity building in low- and middle-income countries (LMICs) when backed by a strong ecosystem partnership.

Book ChapterDOI
K.S. Srinivas
01 Jan 2022

Posted ContentDOI
Engel Roza
12 May 2022
TL;DR: In this paper, the authors discuss the possible impact on the present state of particle physics theory of two unrecognized theoretical elements, namely, the awareness that the quark is a Dirac particle with a polarisable dipole moment in a scalar field and that Dirac's wave equation for fermions, if derived from Einstein's geodesic equation, reveals a scaling theorem for quarks.
Abstract: In this article, the possible impact of two unrecognized theoretical elements on the present state of particle physics theory is discussed. These elements are the awareness that (a) the quark is a Dirac particle with a polarisable dipole moment in a scalar field and that (b) Dirac's wave equation for fermions, if derived from Einstein's geodesic equation, reveals a scaling theorem for quarks. It is shown that recognition of these elements proves by theory quite some relationships that are up to now only empirically assessed, such as, for instance, the mass relationships between the elementary quarks, the relationship between the bare mass and the constituent mass of quarks, the mass spectrum of hadrons and the mass values of the Z boson and the Higgs boson.


Journal Article
Li Wang
01 Dec 2022

Posted ContentDOI
Engel Roza
19 Dec 2022
TL;DR: In this paper, it was shown that the nuclear background energy in the vacuum is polarized by quarks, thereby not only explaining the narrow range of the strong interaction, but also giving an explanation for the left-handedness of neutrinos.
Abstract: Starting from a simplified survey of Fermi's neutrino theory, it is shown that the nuclear background energy in the vacuum is polarized by quarks, thereby not only explaining the narrow range of the strong interaction, but also giving an explanation for the left-handedness of neutrinos. Recognizing that such a vacuum polarization explains the cosmological dark matter and dark energy phenomena, it is hypothesized that the elementary constituents of the nuclear background energy and the cosmological background energy are the same. The hypothesis is supported by an assessment of the quark's "naked" mass.

Posted ContentDOI
Engel Roza
27 Jan 2022
TL;DR: In this paper, it was shown that the four fundamental physical forces, i.e. weak interaction, strong interaction, electromagnetism and gravity, all have their origin in the quark as the single true elementary particle.
Abstract: It is shown that the four fundamental physical forces, i.e. weak interaction, strong interaction, electromagnetism and gravity, all have their origin in the quark as the single true elementary particle. This requires conceiving the quark as a Dirac particle in a pseudo-tachyon mode, which possesses two real dipole moments: the common one associated with its angular momentum and a second one that is polarisable in a scalar field. This Dirac particle carries a regular charge magnetic monopole without Dirac's string, theorized by Comay. The boson carrier of its field of energy is the gluon, showing an exponential decay of its spatial range because of the influence of an omnipresent energetic background field, known as the Higgs field, in this article interpreted as the Lambda in Einstein's Field Equation.