
Showing papers on "Pixelization published in 2020"


Journal ArticleDOI
TL;DR: Using the proposed method, the authors designed a parasitic decoupling element (resonator) between two closely spaced microstrip patch antennas; it met the design objectives, reducing the mutual coupling (MC) between the radiating elements by 24 dB while preserving the radiation pattern and impedance matching.
Abstract: This letter introduces a systematic procedure to reduce the mutual coupling (MC) between radiating elements in a microstrip array antenna. The proposed method is based on placing parasitic elements between the radiating elements. The shape of the parasitic element, which affects the MC and other radiation properties of the antenna, is determined in a systematic pattern optimization procedure based on pixelization of the area between the antennas and application of a binary optimization algorithm. A bit with a binary value of 1 or 0 is assigned to each pixel, signifying the presence or absence of copper on the pixel surface, respectively. A binary particle swarm optimization algorithm then finds the optimal value of each bit, which yields the optimum shape of the parasitic element. In the optimization process, reducing the MC between the radiating elements is set as the main objective while maintaining good antenna impedance matching and preserving the radiation pattern. Using the proposed method, we designed a parasitic decoupling element (resonator) between two microstrip patch antennas placed close to each other. The results demonstrate that adding this resonator between the two antennas fulfilled the design objectives and reduced the MC between the radiating elements by 24 dB, while preserving the radiation pattern and impedance matching. The simulation results have been verified by fabrication and measurements.
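The pixel-by-pixel search the abstract describes can be sketched as a binary particle swarm optimization loop. The snippet below is a toy illustration only: a simple bit-matching cost stands in for the full-wave mutual-coupling simulation, and the pixel count, swarm size, and coefficients are invented for the demo, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PIXELS = 16     # pixels in the decoupling region (toy size)
N_PARTICLES = 8
N_ITER = 30

# Toy surrogate for the real objective: in the actual design the cost of a
# bit string would come from a full-wave simulation of |S21| plus matching
# and pattern penalties.  Here we just reward matching a random target.
TARGET = rng.integers(0, 2, N_PIXELS)

def cost(bits):
    return int(np.sum(bits != TARGET))   # 0 is optimal

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Binary PSO: velocities are real-valued; a sigmoid maps each velocity to
# the probability that the corresponding bit is 1 (copper present).
pos = rng.integers(0, 2, (N_PARTICLES, N_PIXELS))
vel = rng.normal(0.0, 1.0, (N_PARTICLES, N_PIXELS))
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(N_ITER):
    r1, r2 = rng.random((2, N_PARTICLES, N_PIXELS))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = (rng.random((N_PARTICLES, N_PIXELS)) < sigmoid(vel)).astype(int)
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved] = pos[improved]
    pbest_cost[improved] = costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best cost:", pbest_cost.min())
```

In the paper's setting each cost evaluation is an electromagnetic simulation, so the swarm size and iteration budget trade directly against simulation time.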

8 citations


Journal ArticleDOI
TL;DR: In this paper, a method is described that uses a multimode illumination fiber to project different high-resolution patterns onto the inspected sample; a post-processing algorithm then eliminates the pixelization effects and increases the imaging resolution.

4 citations


Journal ArticleDOI
TL;DR: In this paper, principal component analysis is applied to digitized signals observed in semiconductor detectors with pixelated anodes in order to maximize the utility of the digitized signals and improve energy resolution.
Abstract: Semiconductor detectors with pixelated anodes offer a desirable combination of position sensitivity and energy resolution that is suitable for numerous applications in gamma-ray detection and imaging. Pixelization of electrodes also entails a non-uniform amplitude response as a function of gamma-ray interaction position. Energy calibrations as a function of interaction position improve energy resolution, but systematic errors in the energy reconstruction process persist and limit energy resolution at fixed electronic noise. Digitization of signals induced on collecting and adjacent electrodes offers rich information about gamma-ray interactions and their charge drift and collection as a function of time. Despite the abundance of signals for each interaction, human intuition and rule-based approaches fail to utilize the entire set of observed signals to mitigate the sources of systematic error. In order to maximize the utility of digitized signals and improve energy resolution, principal component analysis is applied to digitized signals observed in semiconductor detectors with pixelated anodes. Principal components identified in the process form the basis for position-specific energy corrections. By leveraging this additional information encoded within digitized signals, energy resolution improves by 10% to 15% in terms of full-width at half- and tenth-maximum values. The feature that accounts for the maximum amount of explained variance between interactions in any given pixel correlates strongly with the depth of interaction.
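The abstract's last observation, that the leading principal component of the digitized waveforms tracks interaction depth, can be illustrated with synthetic data. Everything below is a toy stand-in: the ramp-shaped "signals" and the depth model are invented for the demo, and PCA is done with a plain SVD rather than the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for digitized preamplifier signals: each "event" is a ramp
# whose rise time varies with a hypothetical interaction depth, plus noise.
n_events, n_samples = 500, 100
t = np.linspace(0.0, 1.0, n_samples)
depth = rng.uniform(0.2, 0.8, n_events)               # normalized depth
signals = np.clip(t[None, :] / depth[:, None], 0, 1)  # ramp saturating at 1
signals += rng.normal(0.0, 0.01, signals.shape)       # electronic noise

# Principal component analysis via SVD of the mean-subtracted signal matrix.
centered = signals - signals.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[0]   # projection onto the first principal component

# The leading component's score should correlate with depth, mirroring the
# paper's finding; a depth-dependent energy correction could then be built
# by regressing measured energy against this score.
corr = np.corrcoef(scores, depth)[0, 1]
print(f"|corr(PC1 score, depth)| = {abs(corr):.2f}")
```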

3 citations


Journal ArticleDOI
TL;DR: This work proposes a new approach, called the Pixelization method, which transforms event data into images using a generative model with a novel convolution technique, and shows that it outperforms state-of-the-art methods in accuracy.

3 citations


Posted Content
TL;DR: In this paper, it is argued that when PIV is used to measure turbulence, it can be treated as a time-dependent signal and that the output velocity consists of three primary contributions: the time-dependent velocity, a noise arising from the quantization (or pixelization), and a noise contribution from the fact that the velocity is not uniform inside the interrogation volume.
Abstract: It is argued herein that when PIV is used to measure turbulence, it can be treated as a time-dependent signal. The 'output' velocity consists of three primary contributions: the time-dependent velocity, a noise arising from the quantization (or pixelization), and a noise contribution from the fact that the velocity is not uniform inside the interrogation volume. For both of the latter, the variances depend inversely on the average number of particles (or particle images) in this interrogation volume. All three of these are spatially filtered by the finite extent of the interrogation window. Since the above noises are associated directly with the individual particles (or particle images), the noise between different realizations and different interrogation volumes is statistically independent.
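The inverse-with-N variance scaling claimed for the two noise terms can be checked with a small Monte Carlo sketch. The model below is a deliberately crude stand-in for a PIV interrogation volume: the pixel size, gradient spread, and true velocity are invented numbers, chosen only to make the 1/N behaviour visible.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model of one interrogation volume: the measured velocity is the mean
# displacement of N particle images, each carrying quantization
# (pixelization) noise plus a velocity-gradient contribution.
def measured_velocity(n_particles, n_trials=20000, pixel=0.5, grad=0.3):
    quant = rng.uniform(-pixel / 2, pixel / 2, (n_trials, n_particles))
    gradient = rng.normal(0.0, grad, (n_trials, n_particles))
    return (1.0 + quant + gradient).mean(axis=1)   # true velocity = 1

# Quadrupling the particle count should cut the noise variance by ~4.
v4 = measured_velocity(4)
v16 = measured_velocity(16)
ratio = v4.var() / v16.var()
print(f"variance ratio N=4 vs N=16: {ratio:.2f}  (expected ~4)")
```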

3 citations


Posted Content
TL;DR: This framework synthesizes high-quality face reconstructions, demonstrating that given the statistical prior of a human face, multiple aligned pixelated frames contain sufficient information to reconstruct a high- quality approximation of the original signal.
Abstract: We present a simple method to reconstruct a high-resolution video from a face video in which the identity of a person is obscured by pixelization. This concealment method is popular because the viewer can still perceive a human face figure and the overall head motion. However, we show in our experiments that a fairly good approximation of the original video can be reconstructed in a way that compromises anonymity. Our system exploits the simultaneous similarity and small disparity between close-by video frames depicting a human face, and employs a spatial transformation component that learns the alignment between the pixelated frames. Each frame, supported by its aligned surrounding frames, is first encoded, then decoded to a higher resolution. Reconstruction and perceptual losses promote adherence to the ground truth, and an adversarial loss assists in maintaining domain faithfulness. There is no need for an explicit temporal coherency loss, as coherency is maintained implicitly by the alignment of neighboring frames and the reconstruction. Although simple, our framework synthesizes high-quality face reconstructions, demonstrating that given the statistical prior of a human face, multiple aligned pixelated frames contain sufficient information to reconstruct a high-quality approximation of the original signal.
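The key premise, that several pixelated frames of the same face carry more information than any single one, is easy to see with the standard mosaic pixelization itself. The sketch below (toy sizes, random "frame") shows that two copies of the same content shifted by one pixel pixelate differently, because each shift samples the underlying signal at a different sub-block phase; that is the extra information an alignment-based reconstruction can aggregate.

```python
import numpy as np

def pixelate(frame, block):
    """Conceal a frame by averaging non-overlapping block x block regions
    and tiling each mean back up (standard mosaic pixelization)."""
    h, w = frame.shape
    crop = frame[:h - h % block, :w - w % block]
    small = crop.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return np.kron(small, np.ones((block, block)))

rng = np.random.default_rng(3)
frame = rng.random((32, 32))        # stand-in for a face crop

p0 = pixelate(frame, 8)
p1 = pixelate(np.roll(frame, 1, axis=1), 8)   # same content, shifted 1 px
print("max difference between shifted pixelations:",
      float(np.abs(p0 - p1).max()))
```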

2 citations


Proceedings ArticleDOI
31 Oct 2020
TL;DR: In this article, the AC-coupled Low-Gain Avalanche Diode (AC-LGAD) approach was introduced, where the signal is capacitively induced on fine-pitched electrodes placed over an insulator.
Abstract: A new class of silicon detectors, the Low-Gain Avalanche Diode (LGAD), has already shown excellent timing performance, O(30-40 ps), and is being considered for several applications in the most diverse fields, ranging from timing in high energy physics experiments to medical imaging. LGADs typically exhibit a low to moderate gain (5 - 100), short rise time and high signal-to-noise ratio. Fine pixelization of LGADs is difficult to achieve, and to provide fine spatial resolution the AC-coupled LGAD (AC-LGAD) approach was introduced. In this type of device, the signal is capacitively induced on fine-pitched electrodes placed over an insulator. LGAD and AC-LGAD prototypes have been designed and fabricated at Brookhaven National Laboratory and segmented into both pixel matrices and strips. These prototypes are characterized via the Transient Current Technique (TCT) using a fast-pulsed IR laser. The timing resolution is measured by studying the time coincidence between signals generated in two sensors by beta particles from a 90Sr source, or by using a collimated proton beam. Additional studies of the sensors' performance have been carried out, including the induced signal in neighbouring pixels and strips.

Proceedings ArticleDOI
14 Jun 2020
TL;DR: In this article, a methodology called pixelization is presented to characterize and design an optimal connection scheme for non-planar applications; pixelization is the grouping of similarly producing cells to increase energy conversion efficacy.
Abstract: Non-planar photovoltaics (NP-PV) are an emerging application space enabled by advances in thin-film photovoltaic (PV) materials. However, there are challenges to their practical use due to the complexities introduced by a non-planar PV surface. Curved surfaces have built-in partial shading and even self-shading, phenomena that lead to non-uniform current densities. The conditioning electronics responsible for maximizing production are highly dependent on how the cells are connected. Specifically, maximum power point tracking performance is contingent on the impact partial shading has on the circuit. Shaded cells in a series-connected system inhibit the flow of current, which reduces the output power and can cause irreparable damage to the material. Since non-planar self-shading can be predicted, an optimal configuration that reduces current variance while increasing usable output power is possible. By analyzing the production patterns of the surface, this paper presents a methodology, called pixelization, to characterize and design an optimal connection scheme for non-planar applications. Pixelization is the grouping of similarly producing cells for the purpose of increasing energy conversion efficacy. The results of this work give insight into how geometry and surface symmetry affect how non-planar systems can be designed to maximize the potential of the geometry.
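The grouping step can be sketched with a toy curved surface. Everything here is assumed for illustration: cells on a cylindrical section whose relative production follows the cosine of their angle to the sun, grouped into quantile bands; the paper's actual characterization and connection-scheme design are more involved.

```python
import numpy as np

# Toy "pixelization" of a curved PV surface: cells on a cylinder section
# see irradiance roughly proportional to the cosine of their angle to the
# sun direction (an assumed, simplified production model).
n_cells = 24
angles = np.linspace(-np.pi / 3, np.pi / 3, n_cells)
production = np.clip(np.cos(angles), 0, None)    # relative current density

# Group cells whose production falls in the same quantile band; each group
# ("pixel") can then be wired as its own series string so a low-producing
# cell no longer throttles the whole array.
n_groups = 3
edges = np.quantile(production, np.linspace(0, 1, n_groups + 1))
groups = np.clip(np.searchsorted(edges, production, side="right") - 1,
                 0, n_groups - 1)

for g in range(n_groups):
    members = production[groups == g]
    print(f"group {g}: {members.size} cells, "
          f"mean relative production {members.mean():.2f}")
```

Within each group the current mismatch is much smaller than across the whole surface, which is the point of grouping before choosing series/parallel connections.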

Journal ArticleDOI
01 Jan 2020
TL;DR: In this article, a direct (non-iterative) algorithm to reconstruct the 3D momentum-space picture of any charged particles collected with a velocity-map imaging system from the two-dimensional projected image captured by a position-sensitive detector is presented.
Abstract: In this work, we report a direct (non-iterative) algorithm to reconstruct the three-dimensional (3D) momentum-space picture of any charged particles collected with a velocity-map imaging system from the two-dimensional (2D) projected image captured by a position-sensitive detector. The method consists of fitting the measured image with the 2D projection of a model 3D velocity distribution defined by the physics of the light-matter interaction. The meaningful angle-correlated information is first extracted from the raw data by expanding the image with a complete set of Legendre polynomials. Both the particle's angular and energy distributions are then directly retrieved from the expansion coefficients. The algorithm is simple, easy to implement, fast, and explicitly takes into account the pixelization effect in the measurement.
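The angular step of the algorithm, expanding the distribution in Legendre polynomials and reading the anisotropy off the coefficients, can be sketched on synthetic data. The one-parameter distribution I(theta) ∝ 1 + beta·P2(cos theta) and the noise level below are illustrative assumptions, not the paper's full model (which also handles the 2D projection and pixelization effects).

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(4)

# Toy angular distribution: I(theta) = 1 + beta * P2(cos theta), the
# classic one-photon photoelectron form, sampled on a theta grid.
beta_true = 1.3
theta = np.linspace(0.0, np.pi, 200)
x = np.cos(theta)
intensity = 1.0 + beta_true * legendre.legval(x, [0, 0, 1])  # P2 term
intensity += rng.normal(0.0, 0.02, intensity.size)           # detector noise

# Least-squares Legendre expansion up to order 4; the anisotropy parameter
# is the ratio of the P2 coefficient to the P0 coefficient.
coef = legendre.legfit(x, intensity, deg=4)
beta_fit = coef[2] / coef[0]
print(f"recovered beta = {beta_fit:.2f}")
```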

Proceedings ArticleDOI
05 Jul 2020
TL;DR: A systematic procedure to reduce mutual coupling between linear array microstrip antennas is proposed based on pixelization of the surface between two patch elements and the application of a binary particle swarm optimization (BPSO) algorithm.
Abstract: In this paper, a systematic procedure to reduce mutual coupling between linear $2 \times 1$ array microstrip antennas is proposed. The proposed design procedure is based on pixelization of the surface between two patch elements and the application of a binary particle swarm optimization (BPSO) algorithm. In the binary optimization algorithm, each pixel is represented by a bit that takes a binary value of 1 or 0, indicating the presence or absence of copper on the area of the pixel. The optimization goal is to minimize the mutual coupling between the two elements while the impedance matching and radiation pattern are not degraded. The concept of the work is validated by full-wave simulations and measurements.

Posted Content
TL;DR: This work revisits the question of nonhierarchical sphere pixelization based on cube symmetries and develops a new one, dubbed the "Similar Radius Sphere Pixelization" (SARSPix), with nearly square pixels, which provides the most adapted indexing over the sphere for all distance-related computations.
Abstract: The counting of pairs of galaxies or stars according to their distance is at the core of all real-space correlation analyses performed in astrophysics and cosmology. The upcoming ground-based (LSST) and space-based (Euclid) surveys will measure properties of billions of galaxies, and tomographic shells will contain hundreds of millions of objects. The combinatorics of the pair count challenges our ability to perform such counting in minute-scale times, the order of magnitude needed for optimizing analyses through the intensive use of simulations. The problem is not CPU intensive and is only limited by efficient access to the data, hence it belongs to the "big data" category. We use the popular Apache Spark framework to address it and design an efficient high-throughput algorithm to deal with hundreds of millions to billions of input data points. To optimize it, we revisit the question of nonhierarchical sphere pixelization based on cube symmetries and develop a new one that we call the "Similar Radius Sphere Pixelization" (SARSPix), with square-like pixels. It provides the most adapted sphere packing for all distance-related computations. Using LSST-like fast simulations, we compute autocorrelation functions on tomographic bins containing between a hundred million and one billion data points. In all cases we achieve the full construction of a classical pair-distance histogram in about 2 minutes, using a moderate number of worker nodes (16 to 64). This is typically two orders of magnitude faster than what is achieved today and shows the potential of using these new techniques in the field of astronomy on ever-growing datasets. The method presented here is flexible enough to be adapted to any medium-size cluster, and the software is publicly available from https://github.com/LSSTDESC/SparkCorr.
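The core computation being distributed, a pair-distance histogram on the sphere, fits in a few lines at toy scale. The brute-force O(n^2) sketch below uses invented uniform points and bin settings; it is exactly this step that requires Spark plus a pixelization such as SARSPix once n reaches hundreds of millions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Draw n points uniformly on the unit sphere (uniform in phi and cos theta).
n = 2000
phi = rng.uniform(0.0, 2.0 * np.pi, n)
z = rng.uniform(-1.0, 1.0, n)
r = np.sqrt(1.0 - z**2)
xyz = np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

# Angular separation of every pair from dot products, then histogram.
# Brute force is fine at n = 2000; the paper's point is that this O(n^2)
# step must be distributed and pixel-indexed at survey scale.
cosang = np.clip(xyz @ xyz.T, -1.0, 1.0)
iu = np.triu_indices(n, k=1)                 # each pair counted once
sep_deg = np.degrees(np.arccos(cosang[iu]))
hist, edges = np.histogram(sep_deg, bins=18, range=(0.0, 180.0))
print("pairs counted:", hist.sum())          # n*(n-1)/2
```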