
Showing papers on "Zoom" published in 2021


Journal ArticleDOI
TL;DR: In this article, the authors used a 4-week within-person experience sampling field experiment in which camera use was manipulated to better understand how camera use impacts fatigue, which may affect outcomes during meetings (e.g., participant voice and engagement).
Abstract: The COVID-19 pandemic propelled many employees into remote work arrangements, and face-to-face meetings were quickly replaced with virtual meetings. This rapid uptick in the use of virtual meetings led to much popular press discussion of virtual meeting fatigue (i.e., "Zoom fatigue"), described as a feeling of being drained and lacking energy following a day of virtual meetings. In this study, we aimed to better understand how one salient feature of virtual meetings, the camera, impacts fatigue, which may affect outcomes during meetings (e.g., participant voice and engagement). We did so through a 4-week within-person experience sampling field experiment in which camera use was manipulated. Drawing from theory related to self-presentation, we propose and test a model in which study condition (camera on versus off) was linked to daily feelings of fatigue; daily fatigue, in turn, was presumed to relate negatively to voice and engagement during virtual meetings. We further predicted that gender and organizational tenure would moderate this relationship, such that using a camera during virtual meetings would be more fatiguing for women and for newer members of the organization. Results of 1,408 daily observations from 103 employees supported our proposed model, with supplemental analyses suggesting that fatigue affects same-day and next-day meeting performance. Given the anticipated prevalence of remote work even after the pandemic subsides, our study offers key insights for ongoing organizational best practices surrounding virtual meetings. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

81 citations


Journal ArticleDOI
TL;DR: The first systematic study of immersive navigation techniques for 3D scatterplots is reported, evaluating them in terms of a time-cost model for the different components of navigation: way-finding, travel, number of travel steps, and context switching.
Abstract: Abstract data has no natural scale, and so interactive data visualizations must provide techniques that allow the user to choose their viewpoint and scale. Such techniques are well established in desktop visualization tools. The two most common techniques are zoom+pan and overview+detail. However, how best to enable the analyst to navigate and view abstract data at different levels of scale in immersive environments has not previously been studied. We report the findings of the first systematic study of immersive navigation techniques for 3D scatterplots. We tested four conditions that represent our best attempt to adapt standard 2D navigation techniques to data visualization in an immersive environment while still providing standard immersive navigation through physical movement and teleportation. We compared a room-sized visualization with a zooming interface, each with and without an overview. We find significant differences in participants' response times and accuracy for a number of standard visual analysis tasks. Both zoom and overview provide benefits over standard locomotion support alone (i.e., physical movement and pointer teleportation). However, which variation is superior depends on the task. We obtain a more nuanced understanding of the results by analyzing them in terms of a time-cost model for the different components of navigation: way-finding, travel, number of travel steps, and context switching.
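The paper's time-cost decomposition can be illustrated with a toy additive model. All component timings below are invented for illustration; they are not the authors' measured values:

```python
def navigation_time(wayfind_s, travel_s_per_step, n_steps, switch_s, n_switches):
    """Total navigation cost as an additive time model over the four
    components named in the abstract (illustrative, not the paper's
    exact formulation)."""
    return wayfind_s + travel_s_per_step * n_steps + switch_s * n_switches

# Compare a zooming interface (few long travels) with teleportation
# (many short hops plus extra context switches), under assumed timings.
zoom_cost = navigation_time(wayfind_s=2.0, travel_s_per_step=1.5, n_steps=2,
                            switch_s=0.5, n_switches=1)   # 5.5 s
teleport_cost = navigation_time(wayfind_s=2.0, travel_s_per_step=0.4, n_steps=8,
                                switch_s=0.5, n_switches=4)  # 7.2 s
```

Which technique wins under such a model depends entirely on the assumed per-component costs, which is consistent with the paper's finding that the superior variation depends on the task.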

32 citations


Journal ArticleDOI
TL;DR: In this article, the authors examined facial dissatisfaction (feeling unhappy about one's own facial appearance) as a potential psychological mechanism of virtual meeting fatigue and suggested that practical approaches to mitigating VM fatigue could include implementing technological features that reduce self-focused attention during VMs (e.g., employing avatars).
Abstract: Viewing self-video during videoconferences potentially causes negative self-focused attention that contributes to virtual meeting (VM) or "Zoom" fatigue. The present research examines this proposition, focusing on facial dissatisfaction (feeling unhappy about one's own facial appearance) as a potential psychological mechanism of VM fatigue. A study of survey responses from a panel of 613 adults found that VM fatigue was 14.9 percent higher for women than for men, and 11.1 percent higher for Asian than for White participants. These gender and race/ethnicity differences were found to be mediated by facial dissatisfaction. This study replicates earlier VM fatigue research, extends the theoretical understanding of facial dissatisfaction as a psychological mechanism of VM fatigue, and suggests that practical approaches to mitigating VM fatigue could include implementing technological features that reduce self-focused attention during VMs (e.g., employing avatars).

31 citations


Journal ArticleDOI
TL;DR: In this paper, simultaneous recordings of three repetitions of the cardinal vowels were made using a Zoom H6 Handy Recorder with an external microphone (henceforth, H6) and compared with two alternatives accessible to potential participants at home.
Abstract: Face-to-face speech data collection has been next to impossible globally as a result of the COVID-19 restrictions. To address this problem, simultaneous recordings of three repetitions of the cardinal vowels were made using a Zoom H6 Handy Recorder with an external microphone (henceforth, H6) and compared with two alternatives accessible to potential participants at home: the Zoom meeting application (henceforth, Zoom) and two lossless mobile phone applications (Awesome Voice Recorder, and Recorder; henceforth, Phone). F0 was tracked accurately by all of the devices; however, for formant analysis (F1, F2, F3), Phone performed better than Zoom, i.e., more similarly to H6, although the data extraction method (VoiceSauce, Praat) also resulted in differences. In addition, Zoom recordings exhibited unexpected drops in intensity. The results suggest that lossless format phone recordings present a viable option for at least some phonetic studies.
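The device comparison above comes down to measuring how far each recorder's formant values deviate from the H6 reference. A minimal sketch, with invented numbers standing in for the study's measurements:

```python
# Hypothetical mean formant values (Hz), one (F1, F2, F3) triple per vowel.
# These numbers are illustrative only, not the paper's data.
h6    = [(300, 2300, 3000), (700, 1200, 2600)]   # reference recorder
phone = [(310, 2280, 2980), (690, 1230, 2590)]   # lossless phone apps
zoom  = [(350, 2150, 2800), (760, 1100, 2450)]   # Zoom meeting recording

def mean_abs_dev(device, reference):
    """Mean absolute formant deviation (Hz) from the reference recorder."""
    diffs = [abs(d - r)
             for dev_vowel, ref_vowel in zip(device, reference)
             for d, r in zip(dev_vowel, ref_vowel)]
    return sum(diffs) / len(diffs)

# A smaller deviation means the device tracks the H6 more closely, which
# is the sense in which Phone "performed better than Zoom".
```

With these toy numbers, `mean_abs_dev(phone, h6)` is far smaller than `mean_abs_dev(zoom, h6)`, mirroring the paper's qualitative result for F1-F3.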

23 citations




Proceedings ArticleDOI
14 Apr 2021
TL;DR: FakeBuster as discussed by the authors is a standalone deep learning-based solution, which enables a user to detect if another person's video is manipulated or spoofed during a video conference-based meeting.
Abstract: This paper proposes FakeBuster, a novel DeepFake detector for (a) detecting impostors during video conferencing, and (b) manipulated faces on social media. FakeBuster is a standalone deep learning-based solution, which enables a user to detect if another person's video is manipulated or spoofed during a video conference-based meeting. This tool is independent of video conferencing solutions and has been tested with Zoom and Skype applications. It employs a 3D convolutional neural network for predicting video fakeness. The network is trained on a combination of datasets such as Deeperforensics, DFDC, VoxCeleb, and deepfake videos created using locally captured images (specific to video conferencing scenarios). Diversity in the training data makes FakeBuster robust to multiple environments and facial manipulations, thereby making it generalizable and ecologically valid.

18 citations


Journal ArticleDOI
TL;DR: Guided Zoom, as discussed by the authors, uses explainability to improve model performance by making sure the model has "the right reasons" for a prediction, where the evidence behind a prediction is defined as the grounding, in pixel space, of a specific class-conditional probability in the model output.
Abstract: In state-of-the-art deep single-label classification models, the top-k (k = 2, 3, 4, ...) accuracy is usually significantly higher than the top-1 accuracy. This is more evident in fine-grained datasets, where differences between classes are quite subtle. Exploiting the information provided in the top k predicted classes boosts the final prediction of a model. We propose Guided Zoom, a novel way in which explainability can be used to improve model performance. We do so by making sure the model has "the right reasons" for a prediction. The reason/evidence upon which a deep neural network makes a prediction is defined to be the grounding, in pixel space, of a specific class-conditional probability in the model output. Guided Zoom examines how reasonable the evidence used to make each of the top-k predictions is. Test-time evidence is deemed reasonable if it is coherent with evidence used to make similar correct decisions at training time. This leads to better-informed predictions. We explore a variety of grounding techniques and study their complementarity for computing evidence. We show that Guided Zoom improves a model's classification accuracy and achieves state-of-the-art classification performance on four fine-grained classification datasets. Our code is available at https://github.com/andreazuna89/Guided-Zoom .
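The evidence-coherence idea can be sketched as a top-k re-ranking step. Everything below (the cosine measure, the per-class prototype vectors, the class names) is an illustrative assumption, not the paper's implementation:

```python
def cosine(u, v):
    """Cosine similarity between two evidence vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def guided_rerank(topk_classes, test_evidence, train_evidence):
    """Pick, among the top-k classes, the one whose test-time evidence is
    most coherent with evidence from correct training-time decisions.
    `train_evidence` maps class -> a prototype vector (an assumption here;
    the paper grounds evidence in pixel space)."""
    return max(topk_classes,
               key=lambda c: cosine(test_evidence[c], train_evidence[c]))

# Toy example: the top-1 class has incoherent evidence, so re-ranking flips it.
topk = ["sparrow", "finch"]
test_ev = {"sparrow": [0.1, 0.9, 0.0], "finch": [0.8, 0.1, 0.1]}
train_ev = {"sparrow": [0.9, 0.1, 0.0], "finch": [0.9, 0.2, 0.1]}
print(guided_rerank(topk, test_ev, train_ev))  # "finch" wins on coherence
```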

17 citations


Journal ArticleDOI
01 Mar 2021
TL;DR: In this article, the primary disk, network, and memory forensic analysis of the Zoom video conferencing application is presented, which demonstrates that it is possible to find users' critical information in plain text and/or encrypted/encoded, such as chat messages, names, email addresses, passwords, and much more through network captures, forensic imaging of digital devices and memory forensics.
Abstract: The global pandemic of COVID-19 has turned the spotlight on video conferencing applications like never before. In this critical time, applications such as Zoom have experienced a surge in its user base jump over the 300 million daily mark ( ZoomBlog, 2020 ). The increase in use has led malicious actors to exploit the application, and in many cases perform Zoom Bombings. Therefore forensically examining Zoom is inevitable. Our work details the primary disk, network, and memory forensic analysis of the Zoom video conferencing application. Results demonstrate it is possible to find users' critical information in plain text and/or encrypted/encoded, such as chat messages, names, email addresses, passwords, and much more through network captures, forensic imaging of digital devices, and memory forensics. Furthermore we elaborate on interesting anti-forensics techniques employed by the Zoom application when contacts are deleted from the Zoom application's contact list.

15 citations


Journal ArticleDOI
Jiang Zhao, Di Wang, Yi Zheng, Chao Liu, Qiong-Hua Wang
TL;DR: In this article, a continuous optical zoom microscopy imaging system based on liquid lenses is proposed, which is composed of four electrowetting liquid lenses and six glass lenses and can continuously change the magnification of the targets in real-time.
Abstract: In this paper, a continuous optical zoom microscopy imaging system based on liquid lenses is proposed. Compared with traditional microscopes, which have discrete magnifications and require manual switching of the objective lens to change the magnification, the proposed microscope can continuously change the magnification of the targets in real time. An adaptive zoom microscope, a liquid lens driving board, a microscope bracket, an adjustable three-dimensional stage, and a light source are stacked to form the main framework of the continuous optical zoom microscopy imaging system. The adaptive zoom microscope, which is composed of four electrowetting liquid lenses and six glass lenses, forms the main imaging element of the system. By changing the driving voltage applied to the four liquid lenses, the focal lengths of the liquid lenses can be modulated to achieve continuous zooming. By contrast, in traditional microscopes, the zooming process can only be achieved by rotating eyepieces of different magnifications. At a fixed working distance, the magnification of the proposed microscope can change continuously from ∼9.6× to ∼22.2× with a response time of ∼50 ms. Moreover, an axial depth scanning of ∼1000 µm can be achieved without any mechanical movement. Our experiments proved that the microscope has stable performance and high consistency during zooming. Therefore, the proposed microscope has obvious advantages over traditional microscopes in observing dynamic samples at different magnifications and can be commercialized to further expand applications in biochemical and pathological analysis.

14 citations


Journal ArticleDOI
30 Jun 2021
TL;DR: The findings of this study show that applying the Zoom meeting application to learning during this pandemic can make it easier for lecturers and students to interact synchronously in the learning process.
Abstract: In the midst of the COVID-19 pandemic outbreak, almost all of Indonesia, including Bengkulu province, experienced a spike in positive cases of COVID-19, particularly in the Rejang Lebong Regency, resulting in many very significant changes in almost all fields, especially in education. The learning process consists of synchronous learning, carried out over the internet (online), namely face-to-face by video calls/Zoom meetings, and asynchronous learning, namely by assignments. Using observation, interviews, and notes as data collection methods, the research approach used is a qualitative research method. The aim of this study is to explain the use of the Zoom meeting application for online learning during the pandemic, to evaluate the constraints of using the application, and to assess the benefits of several of its features. The findings of this study show that applying the Zoom meeting application to learning during this pandemic can make it easier for lecturers and students to interact synchronously in the learning process.

14 citations


Journal ArticleDOI
Shaopeng Hu, Kohei Shimasaki, Mingjun Jiang, Taku Senoo, Idaku Ishii
TL;DR: In this paper, a dual-camera system is presented that uses deep learning methods to simultaneously capture zoomed-in images with an ultrafast pan-tilt camera and wide-view images with a fixed camera.
Abstract: This paper presents a novel dual-camera system that can simultaneously capture zoomed-in images using an ultrafast pan-tilt camera and a fixed wide-view camera using deep learning methods. An ultrafast pan-tilt camera can function as multiple virtual pan-tilt cameras by synchronizing a high-frame-rate zooming-view camera and an ultrafast pan-tilt mirror device that can switch over 500 different views in a second. A wide-view camera can obtain images in a fixed view in which multiple targets to be tracked are captured in low resolution and then recognized by processing the images using deep learning methods at a rate of dozens of frames per second. Based on the positions of all targets, recognized by the wide-view camera, the pan and tilt angles of multiple pan-tilt cameras are virtually controlled using an ultrafast pan-tilt camera through multithread viewpoint control to simultaneously capture the zoomed-in images of all targets. Our developed system can operate 10 virtual pan-tilt cameras (25 fps) with multithread viewpoint control and 4 ms time granularity in synchronization with convolutional neural-network-based recognition model operating at 25 fps, which is accelerated by a general-purpose computing on graphics processing units. The effectiveness of our system was demonstrated by the results of several experiments conducted on simultaneous zoom shooting of multiple running objects (persons and cars) in the range of approximately 80 m or higher in a natural outdoor scene, which was formerly too wide for a single fixed camera to capture clearly and simultaneously.
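The multithread viewpoint control amounts to time-slicing one physical pan-tilt camera across many virtual views. A toy round-robin schedule consistent with the reported figures (10 virtual cameras at 25 fps with 4 ms granularity), not the system's actual controller:

```python
def schedule_slots(n_cameras, fps, slot_ms):
    """Build one second of round-robin viewpoint slots: each virtual
    pan-tilt camera gets `fps` slots of `slot_ms` each (illustrative)."""
    total_slots = n_cameras * fps
    # The mirror device must be able to switch views fast enough:
    assert total_slots * slot_ms <= 1000, "schedule exceeds one second"
    return [cam for _ in range(fps) for cam in range(n_cameras)]

slots = schedule_slots(n_cameras=10, fps=25, slot_ms=4)
print(len(slots))    # 250 view switches per second
print(slots[:12])    # cycles through cameras 0..9, then wraps around
```

With 10 cameras at 25 fps this needs 250 switches per second, well within the mirror's reported capability of over 500 views per second.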

Journal ArticleDOI
TL;DR: The proposed calibration method will broaden the application range for 3D digital image correlation in scientific and engineering fields.

Journal ArticleDOI
TL;DR: In this article, a degradation generation network is trained to generate realistic LR images and capture their distribution (learning to zoom out), and a degradation-adaptive super-resolution network is then learned while minimizing the discrepancy between the generated and real LR images (learning to zoom in).
Abstract: Most learning-based super-resolution (SR) methods aim to recover a high-resolution (HR) image from a given low-resolution (LR) image via learning on LR-HR image pairs. SR methods learned on synthetic data do not perform well in the real world, due to the domain gap between artificially synthesized and real LR images. Some efforts are thus taken to capture real-world image pairs. However, the captured LR-HR image pairs usually suffer from unavoidable misalignment, which hampers the performance of end-to-end learning. Here, focusing on real-world SR, we ask a different question: since misalignment is unavoidable, can we propose a method that does not need LR-HR image pairing and alignment at all and utilizes real images as they are? Hence we propose a framework to learn SR from an arbitrary set of unpaired LR and HR images and see how far such a realistic and "unsupervised" setting can go. To do so, we first train a degradation generation network to generate realistic LR images and, more importantly, to capture their distribution (i.e., learning to zoom out). Instead of assuming the domain gap has been eliminated, we minimize the discrepancy between the generated data and real data while learning a degradation-adaptive SR network (i.e., learning to zoom in). The proposed unpaired method achieves state-of-the-art SR results on real-world images, even on the datasets that favour paired-learning methods more.

Journal ArticleDOI
TL;DR: In this paper, a phenomenological exploratory multiple-case study design was conducted at an open distance e-learning university and a traditional contact residential university, and it was found that the participants viewed video conferencing under the COVID-19 lockdown period as an exhausting experience.
Abstract: This phenomenological exploratory multiple-case study design was conducted at an open distance e-learning university and a traditional contact residential university, and it was found that the participants viewed video conferencing under the COVID-19 lockdown period as an exhausting experience. A second major finding revealed that the participants were empowered with digital literacy skills to use video conferencing effectively. The current findings add to a growing body of literature on video conferencing with a focus on Zoom fatigue. Further research might explore the lived Zoom experiences of administrators, students, and a larger group of faculties over a longer period. The study findings must be considered when planning and implementing video conferencing for academics and students in open distance e-learning contexts. This study showed that video conferencing is one tool in the emergence of a digital zoom revolution that has radically changed the workspace. The evidence from this study suggests that Zoom fatigue is a reality check for work-related health management. © 2021 IGI Global. All rights reserved.

Journal ArticleDOI
Xiao Yu, Hanyu Wang, Yuan Yao, Songnian Tan, Yongsen Xu, Yalin Ding
TL;DR: In this article, a method for the automatic design of a special mid-wavelength infrared zoom system in which the positions of both the pupil planes and the image plane are fixed during the zooming process is presented.
Abstract: This paper presents a method for the automatic design of a special mid-wavelength infrared zoom system in which the positions of both the pupil planes and the image plane are fixed during the zooming process. In this method, the formulas for the desired zoom system are derived to ensure the exact fulfillment of the conditions with three moving components based on Gaussian reduction. A mathematical model is established based on the particle swarm optimization to determine the first-order parameters of the paraxial design. Then, the model is optimized by iteratively updating a candidate solution with regard to a specific merit function that characterizes the zoom ratio, compactness, and aberration terms. In the optimization phase, the physical feasibility is considered as the constraint on the candidate solutions. Using two examples, this work demonstrates that the developed method is an efficient and practical tool for finding a realizable initial configuration of a dual-conjugate zoom system. Since this method is no longer reliant on the traditional trial-and-error technique, it is an important step toward the automatic design of complex optical systems using artificial intelligence.

Journal ArticleDOI
TL;DR: Different from existing video satellite image SR methods, ASSR can continuously zoom LR video satellite images with arbitrary integer and noninteger scale factors in a single model and has superior reconstruction performance compared with the state-of-the-art SR methods.
Abstract: Super-resolution (SR) has attracted increasing attention as it can improve the quality of video satellite images. Most previous studies only consider several integer magnification factors and focus on obtaining a specific SR model for each scale factor. However, in the real world, it is a common requirement to zoom the videos arbitrarily by rolling the mouse wheel. In this article, we propose a unified network for arbitrary scale SR (ASSR) of video satellite images. The proposed ASSR consists of two modules, i.e., feature learning module and arbitrary upscale module. The feature learning module accepts multiple low-resolution (LR) frames and extracts useful features of those frames by using many 3-D residual blocks. The arbitrary upscale module takes the extracted features as input and enhances the spatial resolution by subpixel convolution and bicubic-based adjustment. Different from existing video satellite image SR methods, ASSR can continuously zoom LR video satellite images with arbitrary integer and noninteger scale factors in a single model. Experiments have been conducted on real video satellite images acquired by Jilin-1 and OVS-1. Quantitative and qualitative results have demonstrated that ASSR has superior reconstruction performance compared with the state-of-the-art SR methods.

Book ChapterDOI
TL;DR: In this paper, the authors investigate the flow-rate fairness of video conferencing congestion control, using Zoom as an example, and the influence of deploying AQM. They find that Zoom is slow to react to bandwidth changes and uses two to three times the bandwidth of TCP in low-bandwidth scenarios.
Abstract: Congestion control is essential for the stability of the Internet, and the corresponding algorithms are commonly evaluated for interoperability based on flow-rate fairness. In contrast, video conferencing software such as Zoom uses custom congestion control algorithms whose fairness behavior is mostly unknown. Aggravatingly, video conferencing has recently seen a drastic increase in use, partly caused by the COVID-19 pandemic, and could hence negatively affect how available Internet resources are shared. In this paper, we thus investigate the flow-rate fairness of video conferencing congestion control, using Zoom as an example, and the influence of deploying AQM. We find that Zoom is slow to react to bandwidth changes and uses two to three times the bandwidth of TCP in low-bandwidth scenarios. Moreover, when competing with delay-aware congestion control such as BBR, we see high queuing delays. AQM reduces these queuing delays and can equalize the bandwidth use when used with flow-queuing. However, it then introduces high packet loss for Zoom, leaving open the question of how delay and loss affect Zoom's QoE. We hence present a preliminary user study in the appendix which indicates that the QoE is at least not improved and should be studied further.
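Flow-rate fairness of the kind evaluated here is commonly quantified with Jain's fairness index; the paper's own metrics may differ, so this is only the standard textbook measure:

```python
def jain_index(rates):
    """Jain's fairness index over per-flow rates: 1.0 means perfectly
    equal shares, 1/n means one flow takes everything."""
    n = len(rates)
    return sum(rates) ** 2 / (n * sum(r * r for r in rates))

# A Zoom flow taking ~3x the bandwidth of a competing TCP flow (as the
# paper reports for low-bandwidth scenarios) vs. an equal split:
print(jain_index([3.0, 1.0]))  # unfair split
print(jain_index([2.0, 2.0]))  # perfectly fair split
```

For the 3:1 split the index is 0.8, visibly below the 1.0 of an equal split, which is how such measurements make "two to three times the bandwidth of TCP" precise.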

Journal ArticleDOI
TL;DR: The benefit of adaptive meshing strategies for a recently introduced thermodynamic topology optimization is presented, and the adaptivity can be used to zoom into coarse global structures to better resolve details of interesting spots such as truss nodes.
Abstract: The benefit of adaptive meshing strategies for a recently introduced thermodynamic topology optimization is presented. Employing an elementwise gradient penalization, stability is obtained and checkerboarding is prevented, while very fine structures can be resolved sharply using adaptive meshing at material-void interfaces. The use of coarse elements, and thereby a smaller design space, does not restrict the obtainable structures if proper adaptive remeshing is performed during the optimization. Qualitatively equal structures and quantitatively the same stiffness as for uniform meshing are obtained with fewer degrees of freedom, lower memory requirements, and shorter overall optimization runtime. In addition, the adaptivity can be used to zoom into coarse global structures to better resolve details at interesting spots such as truss nodes.

Journal ArticleDOI
TL;DR: This paper uses Gaussian brackets and a particle swarm optimization (PSO) algorithm to design a 20× four-group zoom lens with a focus tunable lens and two moving groups and shows that this method is suitable for the first-order design of complex optical systems.
Abstract: In this paper, we use Gaussian brackets and a particle swarm optimization (PSO) algorithm to design a 20× four-group zoom lens with a focus-tunable lens and two moving groups. This method uses Gaussian brackets to derive the paraxial design equations of the zoom lens and determine the lens parameters. In the optimization stage, we define an objective function as a performance indicator for the first-order design and implement the PSO algorithm in MATLAB to find the globally optimal first-order design. The optimized 20× zoom lens has a focal length of 4.5-90 mm and a total length of 145 mm. This method overcomes the difficulty of finding an initial structure for the zoom lens. Compared with traditional trial and error, it is faster and more accurate, and it does not rely on an initial value. The results show that this method is suitable for the first-order design of complex optical systems.
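A minimal particle swarm optimizer of the kind used for such a first-order search can be sketched as follows; the merit function here is a toy quadratic stand-in for the paper's actual objective (and the paper's implementation is in MATLAB, not Python):

```python
import random

def pso(merit, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: minimize `merit` over a box
    defined by `bounds` (list of (lo, hi) per dimension). Illustrative only."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    best_p = [p[:] for p in pos]                 # personal bests
    best_f = [merit(p) for p in pos]
    g = best_p[min(range(n_particles), key=lambda i: best_f[i])][:]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (best_p[i][d] - pos[i][d])
                             + c2 * r2 * (g[d] - pos[i][d]))
                # Clamp to the search box (a physical-feasibility constraint).
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            f = merit(pos[i])
            if f < best_f[i]:
                best_f[i], best_p[i] = f, pos[i][:]
                if f < merit(g):
                    g = pos[i][:]
    return g

# Toy merit: hit two target first-order parameters (e.g. group powers).
merit = lambda p: (p[0] - 40.0) ** 2 + (p[1] + 25.0) ** 2
sol = pso(merit, bounds=[(-100.0, 100.0), (-100.0, 100.0)])
```

In the paper, the merit function instead combines the zoom ratio, compactness, and aberration terms, and the search variables are the first-order parameters of the paraxial design.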

Journal ArticleDOI
29 Jul 2021
TL;DR: The results showed that the students had difficulties in accessing internet networks and need a balance between quota facilities, internet networks, a variety of other learning media, and learning activities that suit student needs.
Abstract: Online learning and teaching are among the appropriate learning concepts during the current COVID-19 pandemic. This research aims to gather in-depth information about the implementation of Zoom Cloud Meetings in the online learning process. This evaluation is carried out to find solutions to problems that arise while using Zoom Cloud Meetings. The study used a qualitative approach with a survey method. Data were collected through observation, interviews, and questionnaires. The data analysis technique used the stages of reduction, data presentation, data analysis, and drawing conclusions. The results showed that the students had difficulties in accessing internet networks. Students want each Zoom activity to be more interactive and varied with other technology applications. In addition, students also need a balance between quota facilities, internet networks, a variety of other learning media, and learning activities that suit student needs. The results of this study have implications for the development of online learning activities according to students' backgrounds and abilities.

Journal ArticleDOI
TL;DR: A compact varifocal panoramic annular lens (PAL) system based on the four-component mechanical zoom method, which solves the problem that the traditional PAL system cannot zoom in to the region of interest by moving the zoom group and the compensation group.
Abstract: A compact varifocal panoramic annular lens (PAL) system based on the four-component mechanical zoom method is proposed, which solves the problem that the traditional PAL system cannot zoom in to the region of interest. By moving the zoom group and the compensation group, our design achieves continuous zooming, in which the focal length changes from 3.8 to 6 mm. It can keep the position of the image surface unchanged while maintaining a compact structure. The system has a field of view (FoV) of 25°–100° in wide-angle mode and an FoV of 25°–65° in telephoto mode. The modulation transfer function of the wide-angle view is higher than 0.22 at 147 lp/mm. The F-theta distortion is less than 3%, and the relative illuminance is higher than 0.9 in the zoom process. Compared with the zoom PAL system with multiple free-form aspheric surfaces, the proposed system uses multiple spherical lenses and only one Q-type asphere lens to achieve outstanding panoramic zoom imaging results. It is practical and straightforward: easy to manufacture, test, and mass-produce.

Proceedings ArticleDOI
15 Jun 2021
TL;DR: In this article, the visual tracking of an evading UAV using a pursuer-UAV is examined, which combines principles of deep learning, optical flow, intra-frame homography and correlation based tracking.
Abstract: In this work the visual tracking of an evading UAV using a pursuer-UAV is examined. The developed method combines principles of deep learning, optical flow, intra-frame homography and correlation based tracking. A Yolo tracker for short term tracking is employed, complimented by optical flow and homography techniques. In case there is no detected evader-UAV, the MOSSE tracking algorithm re-initializes the search and the PTZ-camera zooms-out to cover a wider Filed of View. The camera's controller adjusts the pan and tilt angles so that the evader-UAV is as close to the center of view as possible, while its zoom is commanded in order to for the captured evader-UAV bounding box cover as much as possible the captured-frame. Experimental studies are offered to highlight the algorithm's principle and evaluate its performance.

Journal ArticleDOI
TL;DR: In this article, the authors focus on state-of-the-art research on optical zoom imaging systems that use adaptive liquid lenses and provide a snapshot of the current state of the art.
Abstract: An optical zoom imaging system that can vary the magnification factor without displacing the object and the image plane has been widely used. Nonetheless, conventional optical zoom imaging systems suffer from slow response, complicated configurations, and vulnerability to misalignment during zoom operation, and are unsuitable for miniaturized applications. This review article focuses on state-of-the-art research on novel optical zoom imaging systems that use adaptive liquid lenses. From the aspect of configuration, according to the number of adaptive liquid lenses, we broadly divide current optical zoom imaging systems using adaptive liquid lenses into two configurations: multiple adaptive liquid lenses, and a single adaptive liquid lens. The principles and configurations of these optical zoom imaging systems are introduced and presented. Three different working principles of the adaptive liquid lens (liquid crystal, polymer elastic membrane, and electrowetting effect) adopted in optical zoom imaging systems are reviewed. Some representative applications of optical zoom imaging systems using adaptive liquid lenses are introduced. The opportunities and challenges of optical zoom imaging systems using adaptive liquid lenses are also discussed. This review aims to provide a snapshot of the current state of this research field and to attract more attention to the development of next-generation optical zoom imaging systems.

Book ChapterDOI
29 Mar 2021
TL;DR: In this article, the authors investigate the flow-rate fairness of video conferencing congestion control, using Zoom as an example, together with the influence of deploying AQM. They find that Zoom is slow to react to bandwidth changes and uses two to three times the bandwidth of TCP in low-bandwidth scenarios.
Abstract: Congestion control is essential for the stability of the Internet, and the corresponding algorithms are commonly evaluated for interoperability based on flow-rate fairness. In contrast, video conferencing software such as Zoom uses custom congestion control algorithms whose fairness behavior is mostly unknown. Aggravatingly, video conferencing has recently seen a drastic increase in use – partly caused by the COVID-19 pandemic – and could hence negatively affect how available Internet resources are shared. In this paper, we thus investigate the flow-rate fairness of video conferencing congestion control, using Zoom as an example, and the influence of deploying AQM. We find that Zoom is slow to react to bandwidth changes and uses two to three times the bandwidth of TCP in low-bandwidth scenarios. Moreover, even when competing with delay-aware congestion control such as BBR, we see high queuing delays. AQM reduces these queuing delays and can equalize the bandwidth use when combined with flow-queuing. However, it then introduces high packet loss for Zoom, leaving open the question of how delay and loss affect Zoom's QoE. We hence present a preliminary user study in the appendix, which indicates that the QoE is at least not improved and should be studied further.
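Flow-rate fairness of the kind evaluated here is commonly summarized with Jain's fairness index; the index below is the standard formula, not necessarily the paper's exact metric.

```python
def jain_fairness(rates):
    """Jain's fairness index over per-flow throughput rates.

    Returns 1.0 when all flows receive equal bandwidth and approaches
    1/n as a single flow dominates all others.
    """
    if not rates:
        raise ValueError("need at least one flow")
    total = sum(rates)
    return (total * total) / (len(rates) * sum(r * r for r in rates))
```

The abstract's low-bandwidth observation, Zoom taking roughly three times a competing TCP flow's rate, corresponds to an index of 0.8 for that pair of flows, well short of the 1.0 that flow-rate-fair sharing would give.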

Journal ArticleDOI
TL;DR: Kyrix-S derives a declarative grammar, based on an existing survey of scatterplot tasks and designs, that enables specification of a variety of SSVs in a few tens of lines of code; the grammar is supported by a distributed layout algorithm which automatically places visual marks onto zoom levels.
Abstract: Static scatterplots often suffer from the overdraw problem on big datasets where object overlap causes undesirable visual clutter. The use of zooming in scatterplots can help alleviate this problem. With multiple zoom levels, more screen real estate is available, allowing objects to be placed in a less crowded way. We call this type of visualization scalable scatterplot visualizations, or SSV for short. Despite the potential of SSVs, existing systems and toolkits fall short in supporting the authoring of SSVs due to three limitations. First, many systems have limited scalability, assuming that data fits in the memory of one computer. Second, too much developer work, e.g., using custom code to generate mark layouts or render objects, is required. Third, many systems focus on only a small subset of the SSV design space (e.g. supporting a specific type of visual marks). To address these limitations, we have developed Kyrix-S, a system for easy authoring of SSVs at scale. Kyrix-S derives a declarative grammar that enables specification of a variety of SSVs in a few tens of lines of code, based on an existing survey of scatterplot tasks and designs. The declarative grammar is supported by a distributed layout algorithm which automatically places visual marks onto zoom levels. We store data in a multi-node database and use multi-node spatial indexes to achieve interactive browsing of large SSVs. Extensive experiments show that 1) Kyrix-S enables interactive browsing of SSVs of billions of objects, with response times under 500ms and 2) Kyrix-S achieves 4X-9X reduction in specification compared to a state-of-the-art authoring system.
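A layout algorithm of the kind described assigns each mark to a zoom level so that screen-space overlap is avoided. The toy sketch below is an illustrative greedy heuristic, not Kyrix-S's actual distributed algorithm; it assumes marks placed at a coarse level stay visible at finer levels and that screen distance doubles per level.

```python
def assign_zoom_levels(marks, mark_size=1.0, max_level=10):
    """Toy greedy placement of ranked marks onto zoom levels.

    marks: (x, y) positions in data coordinates, ordered most- to
    least-important. Each mark goes to the coarsest zoom level at which
    it does not overlap anything already placed at that level or a
    coarser one. Returns the zoom level chosen for each mark.
    """
    placed = []   # (x, y, level) for marks already laid out
    levels = []
    for x, y in marks:
        for level in range(max_level + 1):
            scale = 2 ** level  # screen distance doubles per zoom level
            if all(max(abs(x - px), abs(y - py)) * scale >= mark_size
                   for px, py, plevel in placed if plevel <= level):
                placed.append((x, y, level))
                levels.append(level)
                break
        else:
            placed.append((x, y, max_level))
            levels.append(max_level)  # overflow: densest level
    return levels
```

Two well-separated marks land on the coarsest level, while a mark crowding an existing one is pushed to a deeper level where the extra screen real estate resolves the overlap, which is the core idea behind SSVs.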

Journal ArticleDOI
TL;DR: GenetIC, a new code for generating initial conditions for cosmological N-body simulations, allows precise, user-specified alterations to be made to arbitrary regions of the simulation (while maintaining consistency with the statistical ensemble).
Abstract: We present genetIC, a new code for generating initial conditions for cosmological N-body simulations. The code allows precise, user-specified alterations to be made to arbitrary regions of the simulation (while maintaining consistency with the statistical ensemble). These "genetic modifications" allow, for example, the history, mass, or environment of a target halo to be altered in order to study the effect on its evolution. The code natively supports initial conditions with nested zoom regions at progressively increasing resolution. Modifications in the high-resolution region must propagate self-consistently onto the lower-resolution grids; to enable this while maintaining a small memory footprint, we introduce a Fourier-space filtering approach to generating fields at variable resolution. Due to a close correspondence with modifications, constrained initial conditions can also be produced by genetIC (for example, with the aim of matching structures in the local universe). We test the accuracy of modifications performed within zoom initial conditions. The code achieves subpercent precision, which is easily sufficient for current applications in galaxy formation.
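The Fourier-space filtering idea, splitting a field into a low-frequency part for the coarse grid and a high-frequency remainder for the zoom region, can be illustrated in one dimension. The sketch below uses a naive DFT for self-containment and a sharp cutoff at `k_cut`; both are simplifications, and genetIC's actual filters differ.

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform (illustration only)."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(X):
    """Inverse of dft."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

def split_field(field, k_cut):
    """Split a 1-D periodic field into a low-k part (coarse grid) and a
    high-k remainder (high-resolution detail) with a sharp cutoff.

    By linearity, the two parts sum back to the original field, which is
    what lets modifications propagate consistently between resolutions.
    """
    n = len(field)
    X = dft(field)
    low_k = [X[k] if min(k, n - k) <= k_cut else 0.0 for k in range(n)]
    high_k = [X[k] - low_k[k] for k in range(n)]
    low = [v.real for v in idft(low_k)]
    high = [v.real for v in idft(high_k)]
    return low, high
```

Because the split is linear, an alteration applied to the low-k part alone re-sums with the untouched high-k detail into a consistent modified field.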

Journal ArticleDOI
TL;DR: In this paper, the authors conduct thorough security evaluations of the E2EE of Zoom (version 2.3.1) by analyzing its cryptographic protocols, and they discover several attacks more powerful than those expected by Zoom according to its whitepaper.
Abstract: In the wake of the global COVID-19 pandemic, video conference systems have become essential for not only business purposes, but also private, academic, and educational uses. Among the various systems, Zoom is the most widely deployed video conference system. In October 2020, Zoom Video Communications rolled out their end-to-end encryption (E2EE) to protect conversations in a meeting from even insiders, namely, the service provider Zoom. In this study, we conduct thorough security evaluations of the E2EE of Zoom (version 2.3.1) by analyzing their cryptographic protocols. We discover several attacks more powerful than those expected by Zoom according to their whitepaper. Specifically, if insiders collude with meeting participants, they can impersonate any Zoom user in target meetings, whereas Zoom indicates that they can impersonate only the current meeting participants. Besides, even without relying on malicious participants, insiders can impersonate any Zoom user in target meetings though they cannot decrypt meeting streams. In addition, we demonstrate several impersonation attacks by meeting participants or insiders colluding with meeting participants. Although these attacks may be beyond the scope of the security claims made by Zoom or may be already mentioned in the whitepaper, we reveal the details of the attack procedures and their feasibility in the real-world setting and propose effective countermeasures in this paper. Our findings are not an immediate threat to the E2EE of Zoom; however, we believe that these security evaluations are of value for deeply understanding the security of E2EE of Zoom.

Journal ArticleDOI
28 Jun 2021
TL;DR: In this paper, the authors describe best practices for using Zoom for remote learning, acknowledging technical considerations, and recommending workflows for designing and implementing virtual sessions. And they discuss the important role of cognitive learning theory and how to incorporate these key pedagogical insights into a successful virtual session.
Abstract: Virtual meeting platforms, such as Zoom, have become essential to medical education during the SARS-CoV-2 pandemic. However, many medical educators do not have experience planning or leading these sessions, and despite the prevalence of Zoom learning, little has been published on best practices. In this article, we describe best practices for using Zoom for remote learning, acknowledge technical considerations, and recommend workflows for designing and implementing virtual sessions. Furthermore, we discuss the important role of cognitive learning theory and how to incorporate these key pedagogical insights into a successful virtual session. While in-person classrooms will eventually reopen, virtual teaching will remain a component of medical education. If we use these inventive tools creatively and functionally, virtual learning can augment and elevate the practice of medical education.

Proceedings ArticleDOI
01 Jun 2021
TL;DR: In this paper, a linear 3 × 3 color transform that maps between these two observations is used to train a lightweight neural network comprising no more than 1460 parameters to predict the scene illumination.
Abstract: Most modern smartphones are now equipped with two rear-facing cameras – a main camera for standard imaging and an additional camera to provide wide-angle or telephoto zoom capabilities. In this paper, we leverage the availability of these two cameras for the task of illumination estimation using a small neural network to perform the illumination prediction. Specifically, if the two cameras’ sensors have different spectral sensitivities, the two images provide different spectral measurements of the physical scene. A linear 3 × 3 color transform that maps between these two observations – and that is unique to a given scene illuminant – can be used to train a lightweight neural network comprising no more than 1460 parameters to predict the scene illumination. We demonstrate that this two-camera approach with a lightweight network provides results on par or better than much more complicated illuminant estimation methods operating on a single image. We validate our method’s effectiveness through extensive experiments on radiometric data, a quasi-real two-camera dataset we generated from an existing single camera dataset, as well as a new real image dataset that we captured using a smartphone with two rear-facing cameras.
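The illuminant-specific 3 × 3 transform between the two cameras' measurements can be estimated by per-channel least squares; its nine entries are then the kind of compact feature a network of ~1460 parameters could consume. The pure-Python sketch below is illustrative only and does not reproduce the paper's fitting procedure or network.

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0] * 3
    for i in range(2, -1, -1):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

def fit_color_transform(rgb_a, rgb_b):
    """Least-squares 3x3 transform T with rgb_b ≈ T @ rgb_a per pixel.

    rgb_a, rgb_b: parallel lists of (r, g, b) triples from the two
    cameras. Returns T as three rows; flattened, these nine numbers are
    an illuminant-specific feature a small network can consume.
    """
    # Normal equations: (A^T A) t_row = A^T b_channel, one row per channel.
    AtA = [[sum(p[i] * p[j] for p in rgb_a) for j in range(3)] for i in range(3)]
    return [solve3(AtA, [sum(p[i] * q[ch] for p, q in zip(rgb_a, rgb_b))
                         for i in range(3)])
            for ch in range(3)]
```

On synthetic pixel pairs generated by a known transform, the fit recovers that transform exactly (up to floating-point error), which is the sense in which the 3 × 3 mapping is unique to a given scene illuminant.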

Journal ArticleDOI
TL;DR: This paper presents new external labeling methods that allow a user to navigate through dense sets of points of interest while keeping the current map extent fixed and presents a generic algorithmic framework that provides the possibility of expressing the different variants of interaction techniques as optimization problems in a unified way.
Abstract: Visualizing spatial data on small-screen devices such as smartphones and smartwatches poses new challenges in computational cartography. The current interfaces for map exploration require their users to zoom in and out frequently. Indeed, zooming and panning are tools suitable for choosing the map extent corresponding to an area of interest. They are not as suitable, however, for resolving the graphical clutter caused by a high feature density, since zooming in to a large map scale leads to a loss of context. Therefore, in this paper, we present new external labeling methods that allow a user to navigate through dense sets of points of interest while keeping the current map extent fixed. We provide a unified model in which labels are placed at the boundary of the map and visually associated with the corresponding features via connecting lines, called leaders. Since screen space is limited, labeling all features at the same time is impractical; therefore, at any time, we label a subset of the features. We offer interaction techniques to change the current selection of features systematically and, thus, give the user access to all features. We distinguish three methods, which allow the user either to slide the labels along the bottom side of the map or to browse the labels based on pages or stacks. We present a generic algorithmic framework that allows us to express the different variants of interaction techniques as optimization problems in a unified way. We propose both exact algorithms and fast, simple heuristics that solve the optimization problems while taking into account different criteria such as the ranking of the labels, the total leader length, and the distance between leaders. In experiments on real-world data, we evaluate these algorithms and discuss the three variants with respect to their strengths and weaknesses, demonstrating the flexibility of the presented algorithmic framework.
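The boundary-labeling setup, features inside the map connected by leaders to label slots along the bottom edge, can be illustrated with a simple assignment rule: matching features and slots in x-order is a common heuristic that tends to keep straight-line leaders from crossing, though it does not in general minimize total leader length. The sketch below is hypothetical and is not one of the paper's algorithms.

```python
def assign_labels(features, slots):
    """Match features to boundary label slots in x-order.

    features: (x, y) points inside the map; slots: x-positions of label
    ports on the bottom edge (y = 0), one per feature. Returns the total
    straight-line leader length and the slot index chosen per feature.
    """
    order = sorted(range(len(features)), key=lambda i: features[i][0])
    slot_order = sorted(range(len(slots)), key=lambda i: slots[i])
    assignment = [0] * len(features)
    total = 0.0
    for fi, si in zip(order, slot_order):
        x, y = features[fi]
        assignment[fi] = si
        # Euclidean length of the leader from the feature to its port.
        total += ((x - slots[si]) ** 2 + y ** 2) ** 0.5
    return total, assignment
```

The paper's exact algorithms and heuristics optimize richer criteria (label ranking, total leader length, inter-leader distance) across the sliding, paging, and stacking interaction variants; this toy matcher only shows the basic feature-to-slot structure they operate on.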