
Showing papers on "Zoom published in 2018"


Journal ArticleDOI
TL;DR: This work introduces Juicebox.js, a cloud-based web application for exploring the resulting datasets of contact mapping experiments such as Hi-C, which makes every step from raw reads to published figure publicly available as open source code.
Abstract: Contact mapping experiments such as Hi-C explore how genomes fold in 3D. Here, we introduce Juicebox.js, a cloud-based web application for exploring the resulting datasets. Like the original Juicebox application, Juicebox.js allows users to zoom in and out of such datasets using an interface similar to Google Earth. Juicebox.js also has many features designed to facilitate data reproducibility and sharing. Furthermore, Juicebox.js encodes the exact state of the browser in a shareable URL. Creating a public browser for a new Hi-C dataset does not require coding and can be accomplished in under a minute. The web app also makes it possible to create interactive figures online that can complement or replace ordinary journal figures. When combined with Juicer, this makes the entire process of data analysis transparent, insofar as every step from raw reads to published figure is publicly available as open source code.
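
As a rough illustration of the URL-state idea, the sketch below serializes a viewer state into query parameters and recovers it; the parameter names and base URL are invented for the example and are not Juicebox.js's actual scheme.

```python
# Minimal sketch: serializing viewer state into a shareable URL.
# Parameter names and the base URL are illustrative assumptions.
from urllib.parse import urlencode, parse_qs, urlparse

def encode_state(base_url, state):
    """Append the viewer state (locus, resolution, map source) as query parameters."""
    return f"{base_url}?{urlencode(state)}"

def decode_state(url):
    """Recover the viewer state from a shared URL."""
    return {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}

state = {"locus": "chr1:1000000-2000000", "resolution": "5000",
         "map": "https://example.org/data.hic"}
url = encode_state("https://example.org/juicebox", state)
assert decode_state(url)["locus"] == state["locus"]
```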

227 citations


Proceedings ArticleDOI
18 Jun 2018
TL;DR: A generic framework that reduces the computational cost of object detection while retaining accuracy for scenarios where objects with varied sizes appear in high resolution images is introduced.
Abstract: We introduce a generic framework that reduces the computational cost of object detection while retaining accuracy for scenarios where objects with varied sizes appear in high resolution images. Detection progresses in a coarse-to-fine manner, first on a down-sampled version of the image and then on a sequence of higher resolution regions identified as likely to improve the detection accuracy. Built upon reinforcement learning, our approach consists of a model (R-net) that uses coarse detection results to predict the potential accuracy gain for analyzing a region at a higher resolution and another model (Q-net) that sequentially selects regions to zoom in. Experiments on the Caltech Pedestrians dataset show that our approach reduces the number of processed pixels by over 50% without a drop in detection accuracy. The merits of our approach become more significant on a high resolution test set collected from YFCC100M dataset, where our approach maintains high detection performance while reducing the number of processed pixels by about 70% and the detection time by over 50%.
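
A minimal sketch of the coarse-to-fine loop described above; `detect` and `predict_gain` are hypothetical callables standing in for the paper's detector and R-net, and a greedy argmax over predicted gain replaces the learned Q-net.

```python
# Sketch of the coarse-to-fine detection loop under the assumptions above.
def coarse_to_fine(image, detect, predict_gain, budget=3, down=4):
    coarse = image[::down, ::down]                 # cheap pass on a down-sampled copy
    # `detect` is assumed to return a list of boxes in full-image coordinates
    detections = detect(coarse, scale=down, offset=(0, 0))
    for _ in range(budget):
        # candidate regions with predicted accuracy gain: (x0, y0, x1, y1, gain)
        regions = predict_gain(image, detections)
        if not regions:
            break
        x0, y0, x1, y1, gain = max(regions, key=lambda r: r[-1])
        if gain <= 0:                              # zooming in no longer pays off
            break
        detections += detect(image[y0:y1, x0:x1], scale=1, offset=(x0, y0))
    return detections
```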

114 citations


Journal ArticleDOI
TL;DR: In this article, the authors use YZiCS, a hydrodynamic high-resolution zoom-in simulation of 15 clusters, and focus on the tidal stripping suffered by the dark matter halos of cluster members due to preprocessing.
Abstract: To understand the galaxy population in clusters today, we should also consider the impact of previous environments prior to cluster infall, namely preprocessing. We use YZiCS, a hydrodynamic high-resolution zoom-in simulation of 15 clusters, and focus on the tidal stripping suffered by the dark matter halos of cluster members due to preprocessing. We find ~48% of today's cluster members were once satellites of other hosts. This is slightly higher than previous estimates, in part because we consider not just group-mass hosts but hosts of all masses. Thus, we find the preprocessed fraction is poorly correlated with cluster mass and is instead related to each cluster's recent mass growth rate. Hosts less massive than groups are significant contributors, providing more than one-third of the total preprocessed fraction. We find that halo mass loss is a clear function of the time spent in hosts. However, two factors can increase the mass loss rate considerably: the mass ratio of a satellite to its host, and the cosmological epoch when the satellite was hosted. The latter means we may have previously underestimated the role of high-redshift groups. From a sample of heavily tidally stripped members in clusters today, nearly three quarters were previously in a host. Thus, visibly disturbed cluster members are more likely to have experienced preprocessing. Being hosted before cluster infall enables cluster members to experience tidal stripping for extended durations compared to direct cluster infall, and at earlier epochs when hosts were more destructive.

39 citations


Book ChapterDOI
08 Oct 2018
TL;DR: A new approach enabling the dynamic exploration of summaries through two novel operations, zoom and extend, is presented, both providing granular information access to the end-user.
Abstract: Ontology summarization aspires to produce an abridged version of the original data source highlighting its most important concepts. However, in an ideal scenario, the user should not be limited to static summaries. Starting from the summary, s/he should be able to further explore the data source, requesting more detailed information for a particular part of it. In this paper, we present a new approach enabling the dynamic exploration of summaries through two novel operations, zoom and extend. Extend focuses on a specific subgraph of the initial summary, whereas zoom operates on the whole graph, both providing granular information access to the end-user. We show that calculating these operators is NP-complete and provide approximations for their calculation. Then, we show that using extend, we can answer more queries focusing on specific nodes, whereas using global zoom, we can answer more queries overall. Finally, we show that the algorithms employed can efficiently approximate both operators.
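
Since computing the operators exactly is NP-complete, a greedy approximation is natural. The sketch below is one possible reading of extend on an adjacency-set graph; the candidate ranking is an assumption for illustration, not the paper's algorithm.

```python
# Greedy approximation sketch of extend: grow the summary around the selected
# subgraph by pulling in the candidate neighbours best connected to the summary.
def extend(graph, summary, focus, k=3):
    """graph, summary: {node: set-of-neighbours}; focus: nodes of the chosen subgraph."""
    candidates = set()
    for node in focus:
        candidates |= graph.get(node, set()) - summary.keys()
    ranked = sorted(candidates, key=lambda n: len(graph[n] & summary.keys()),
                    reverse=True)
    for n in ranked[:k]:
        summary[n] = graph[n] & set(summary)   # keep only edges into the summary
        for m in summary[n]:
            summary[m].add(n)                  # keep the summary symmetric
    return summary

g = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(extend(g, {"a": {"c"}, "c": {"a"}}, focus={"c"}, k=1))   # pulls in "b"
```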

29 citations


Journal ArticleDOI
TL;DR: It was found that females scanned more than males, and age was positively correlated with scanning percentage, while the facility size was negatively correlated, and the scanning percentage was not predictive of diagnostic accuracy.
Abstract: Following a baseline demographic survey, 87 pathologists interpreted 240 digital whole slide images of breast biopsy specimens representing a range of diagnostic categories from benign to atypia, ductal carcinoma in situ, and invasive cancer. A web-based viewer recorded pathologists’ behaviors while interpreting a subset of 60 randomly selected and randomly ordered slides. To characterize diagnostic search patterns, we used the viewport location, time stamp, and zoom level data to calculate four variables: average zoom level, maximum zoom level, zoom level variance, and scanning percentage. Two distinct search strategies were confirmed: scanning is characterized by panning at a constant zoom level, while drilling involves zooming in and out at various locations. Statistical analysis was applied to examine the associations of different visual interpretive strategies with pathologist characteristics, diagnostic accuracy, and efficiency. We found that females scanned more than males, and age was positively correlated with scanning percentage, while facility size was negatively correlated. Over the 60 cases, the scanning percentage and total interpretation time per slide decreased, and these two variables were positively correlated. The scanning percentage was not predictive of diagnostic accuracy. Higher average zoom level, maximum zoom level, and zoom variance were correlated with over-interpretation.
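
The four variables can be derived mechanically from the viewer logs. Below is a sketch under the stated definition that scanning is panning at a constant zoom level; the log format is an assumption.

```python
# Sketch: deriving the four search-pattern variables from viewport logs.
# Each log entry is assumed to be (timestamp_seconds, zoom_level, x, y).
from statistics import mean, pvariance

def search_pattern_stats(log):
    zooms = [z for _, z, _, _ in log]
    scan_time = total_time = 0.0
    for (t0, z0, x0, y0), (t1, z1, x1, y1) in zip(log, log[1:]):
        dt = t1 - t0
        total_time += dt
        moved = (x0, y0) != (x1, y1)
        if z1 == z0 and moved:          # panning at a constant zoom = scanning
            scan_time += dt
    return {"avg_zoom": mean(zooms),
            "max_zoom": max(zooms),
            "zoom_variance": pvariance(zooms),
            "scanning_pct": 100 * scan_time / total_time if total_time else 0.0}
```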

27 citations


Proceedings ArticleDOI
01 May 2018
TL;DR: A new provably correct reactive coverage control algorithm for PTZ camera networks that continuously configures camera orientations and zoom levels (i.e., angles of view) in order to locally maximize their total coverage quality is proposed.
Abstract: A challenge of pan/tilt/zoom (PTZ) camera networks for efficient and flexible visual monitoring is automated active network reconfiguration in response to environmental stimuli. In this paper, given an event/activity distribution over a convex environment, we propose a new provably correct reactive coverage control algorithm for PTZ camera networks that continuously (re)configures camera orientations and zoom levels (i.e., angles of view) in order to locally maximize their total coverage quality. Our construction is based on careful modeling of visual sensing quality that is consistent with the physical nature of cameras, and we introduce a new notion of conic Voronoi diagrams, based on our sensing quality measures, to solve the camera network allocation problem: that is, to determine where each camera should focus in its field of view given all the other cameras' configurations. Accordingly, we design simple greedy gradient algorithms for both continuous- and discrete-time first-order PTZ camera dynamics that asymptotically converge to a locally optimal coverage configuration. Finally, we provide numerical and experimental evidence demonstrating the effectiveness of the proposed coverage algorithms.
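
A minimal sketch of a greedy gradient step for a single camera, using a numerical gradient of a user-supplied coverage objective; the step size and zoom limits are illustrative assumptions, not the paper's construction.

```python
# One gradient-ascent step on a coverage objective Q(pan, zoom).
# Q is supplied by the caller; step size and zoom range are assumptions.
def ptz_gradient_step(pan, zoom, coverage, lr=0.05, eps=1e-4):
    dQ_dpan = (coverage(pan + eps, zoom) - coverage(pan - eps, zoom)) / (2 * eps)
    dQ_dzoom = (coverage(pan, zoom + eps) - coverage(pan, zoom - eps)) / (2 * eps)
    # clamp the angle of view to the lens's physical zoom range (radians)
    new_zoom = min(max(zoom + lr * dQ_dzoom, 0.1), 1.5)
    return pan + lr * dQ_dpan, new_zoom
```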

25 citations


Journal ArticleDOI
TL;DR: An 8x four-group zoom lens system for a compact camera without any moving groups, achieved by employing focus tunable lenses (FTLs) at the second and fourth groups as a variator and a compensator.
Abstract: We present an 8x four-group zoom lens system for a compact camera without any moving groups by employing focus tunable lenses (FTLs). We locate the FTLs at the second and fourth groups as a variator and a compensator. In the initial design stage, the paraxial solution for each zoom position was determined numerically by examining the solutions for various first-group and third-group powers, to achieve a physically meaningful and compact zoom system at a zoom ratio of 8x. The designed zoom lens has focal lengths of 4-31 mm and apertures of F/3.5 and F/4.5 at the wide and tele positions, respectively.
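
The paraxial reasoning can be reproduced with a short y-u ray trace. The sketch below uses illustrative powers and gaps (not the paper's design) to show that changing only the two FTL powers re-zooms a four-group system with no moving parts; a real design would also constrain the image-plane position.

```python
# Paraxial y-u ray trace through four thin groups (powers in 1/mm, gaps in mm).
# Values are illustrative; groups 2 and 4 play the FTL variator/compensator roles.
def efl(powers, gaps):
    y, u = 1.0, 0.0                      # ray parallel to the axis, height 1
    for i, phi in enumerate(powers):
        u -= y * phi                     # refraction at a thin lens
        if i < len(gaps):
            y += u * gaps[i]             # transfer to the next group
    return -1.0 / u                      # EFL = -y0 / u_final with y0 = 1

gaps = [8.0, 10.0, 8.0]                  # group positions stay fixed
print(efl([0.05, 0.02, 0.04, 0.01], gaps))   # one zoom setting
print(efl([0.05, -0.03, 0.04, 0.06], gaps))  # only the FTL powers changed
```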

23 citations


Journal ArticleDOI
TL;DR: This work formalizes a forward projection model and considers projection geometry cues to improve a metric reconstruction methodology for a calibrated standard plenoptic camera, and evaluates the depth estimation accuracy under different zoom and focus settings.

18 citations


Journal ArticleDOI
28 Dec 2018 - eLearn
TL;DR: Zoom has become a robust, indispensable and reliable video conferencing tool for the way the authors work, teach and learn together.
Abstract: Zoom has become a robust, indispensable and reliable video conferencing tool for the way we work, teach and learn together. When we create a positive social learning environment with supportive faculty and student relationships, we are able to retain our online students. Zoom connects easily across room systems, desktops and mobile devices to seamlessly bring together our various campus sites and long-distance participants. Utilizing the numerous features of Zoom creates an authentic online teaching environment.

17 citations


Posted Content
TL;DR: This work proposes a convolutional neural network with a novel prediction layer and a zoom module, called LineNet, designed for state-of-the-art lane detection in an unordered crowdsourced image dataset, and introduces TTLane, a dataset for efficient lane detection in urban road modeling applications.
Abstract: High Definition (HD) maps play an important role in modern traffic scenes. However, the coverage of HD maps grows slowly because of cost limitations. To efficiently model HD maps, we propose a convolutional neural network with a novel prediction layer and a zoom module, called LineNet. It is designed for state-of-the-art lane detection in an unordered crowdsourced image dataset. We also introduce TTLane, a dataset for efficient lane detection in urban road modeling applications. Combining LineNet and TTLane, we propose a pipeline to model HD maps with crowdsourced data for the first time, and the maps can be constructed precisely even with inaccurate crowdsourced data.

16 citations


Posted Content
TL;DR: Guided Zoom improves the classification accuracy of a deep convolutional neural network model and obtains state-of-the-art results on three fine-grained classification benchmark datasets.
Abstract: We propose Guided Zoom, an approach that utilizes spatial grounding of a model's decision to make more informed predictions. It does so by making sure the model has "the right reasons" for a prediction, defined as reasons that are coherent with those used to make similar correct decisions at training time. The reason/evidence upon which a deep convolutional neural network makes a prediction is defined to be the spatial grounding, in the pixel space, for a specific class conditional probability in the model output. Guided Zoom examines how reasonable such evidence is for each of the top-k predicted classes, rather than solely trusting the top-1 prediction. We show that Guided Zoom improves the classification accuracy of a deep convolutional neural network model and obtains state-of-the-art results on three fine-grained classification benchmark datasets.
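
A hedged sketch of the evidence-coherence idea: score each top-k class by how well its spatial grounding matches that class's reference evidence, then re-rank. The grounding extractor (e.g. a saliency method) and the per-class prototype maps are abstracted into a callable and an input dictionary.

```python
# Sketch of evidence-coherence re-ranking over the top-k predicted classes.
import numpy as np

def cosine(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def guided_rerank(image, topk_classes, grounding, reference_evidence):
    """grounding(image, c) -> saliency map; reference_evidence[c] -> prototype map."""
    scores = {c: cosine(grounding(image, c), reference_evidence[c])
              for c in topk_classes}
    return max(scores, key=scores.get)   # class with the most coherent evidence
```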

Proceedings ArticleDOI
04 Oct 2018
TL;DR: This work presents an image processing pipeline which is capable of tracking very small point targets in an overview camera, adjusting a tilting unit with a mounted zoom camera (PTZ system) to locations of interest and classifying the spotted object in this more detailed camera view.
Abstract: The number of affordable consumer unmanned aerial vehicles (UAVs) available on the market has been growing quickly in recent years. Uncontrolled use of such UAVs in the context of public events like sports events or demonstrations, as well as their use near sensitive areas, such as airports or correctional facilities pose a potential security threat. Automatic early detection of UAVs is thus an important task which can be addressed through multiple modalities, such as visual imagery, radar, audio signals, or UAV control signals. In this work we present an image processing pipeline which is capable of tracking very small point targets in an overview camera, adjusting a tilting unit with a mounted zoom camera (PTZ system) to locations of interest and classifying the spotted object in this more detailed camera view. The overview camera is a high-resolution camera with a wide field of view. Its main purpose is to monitor a wide area and to allow an early detection of candidates, whose motion or appearance warrant a closer investigation. In a subsequent process these candidates are prioritized and successively examined by adapting the orientation of the tilting unit and the zoom level of the attached camera lens, to be able to observe the target in detail and provide appropriate data for the classification stage. The image of the PTZ camera is then used to classify the object into either UAV class or distractor class. For this task we apply the popular SSD detector. Several parameters of the detector have been adapted for the task of UAV detection and classification. We demonstrate the performance of the full pipeline on imagery collected by the system. The data contains actual UAVs as well as distractors, such as birds.
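
A sketch of the overview-to-PTZ hand-off loop described above; the `ptz` object's methods and the candidate fields are hypothetical stand-ins for the system's tilting unit and zoom camera control.

```python
# Sketch: prioritize candidates from the overview camera, then examine each
# with the PTZ camera and classify the detailed view.
def examine_candidates(candidates, ptz, classify, max_looks=5):
    queue = sorted(candidates, key=lambda c: c["score"], reverse=True)[:max_looks]
    results = []
    for cand in queue:
        pan, tilt = ptz.point_to(cand["x"], cand["y"])    # steer the tilting unit
        zoom = ptz.zoom_for_size(cand["size_px"])         # make the target fill the frame
        frame = ptz.grab(pan, tilt, zoom)                 # detailed view for the classifier
        results.append((cand, classify(frame)))           # e.g. "uav" vs "distractor"
    return results
```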

Patent
12 Jan 2018
TL;DR: An interface adjustment method, device, and terminal relating to the field of human-computer interaction, including the steps of displaying a user interface in a display area of a display screen and detecting whether a touch operation on the display screen meets a first preset condition, wherein the first preset condition refers to the condition for triggering the operation of zooming the user interface out.
Abstract: The invention discloses an interface adjustment method and device and a terminal, and relates to the field of human-computer interaction. The method includes the steps of displaying a user interface in a display area of a display screen; detecting whether or not a touch operation on the display screen meets a first preset condition, wherein the first preset condition refers to the condition for triggering the operation of zooming the user interface out; and, when the touch operation meets the first preset condition, displaying the zoomed-out user interface, such that it is completely displayed in the display area of the display screen. By zooming the user interface out and displaying it in the display area of the display screen, the method solves the problem that, when the user interface differs in shape from the display screen, it cannot be completely displayed in the display area of the display screen.
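
The zoom-out rule reduces to a scale-to-fit computation, sketched below.

```python
# Scale-to-fit: the largest scale at which the whole UI fits the display area.
def fit_scale(ui_w, ui_h, disp_w, disp_h):
    return min(disp_w / ui_w, disp_h / ui_h, 1.0)  # never enlarge past 1:1

# A 1200x800 interface on a 1080x720 display is shown at 0.9x, fully visible.
print(fit_scale(1200, 800, 1080, 720))  # 0.9
```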

Journal ArticleDOI
TL;DR: This study designed an actuated tangible tabletop interface, called BotMap, allowing the exploration of geographic data through non-visual panning and zooming, and observed three VI people using the system to perform a classical task consisting of finding the most appropriate itinerary for a journey.
Abstract: The development of novel shape-changing or actuated tabletop tangible interfaces opens new perspectives for the design of physical and dynamic maps, especially for visually impaired (VI) users. Such maps would allow non-visual haptic exploration with advanced functions, such as panning and zooming. In this study, we designed an actuated tangible tabletop interface, called BotMap, allowing the exploration of geographic data through non-visual panning and zooming. In BotMap, small robots represent landmarks and move to their correct position whenever the map is refreshed. Users can interact with the robots to retrieve the names of the landmarks they represent. We designed two interfaces, named Keyboard and Sliders, which enable users to pan and zoom. Two evaluations were conducted with, respectively, ten blindfolded and eight VI participants. Results show that both interfaces were usable, with a slight advantage for the Keyboard interface in terms of navigation performance and map comprehension, and that, even when many panning and zooming operations were required, VI participants were able to understand the maps. Most participants managed to accurately reconstruct maps after exploration. Finally, we observed three VI people using the system and performing a classical task consisting of finding the most appropriate itinerary for a journey.
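
Refreshing the map requires mapping each landmark's geographic coordinates through the current pan/zoom viewport to a position on the physical board. Below is a sketch under assumed conventions (zoom doubling per level, north up, a simple equirectangular-style mapping), not BotMap's actual implementation.

```python
# Sketch: landmark (lon, lat) -> board position under the current viewport.
def to_board(lon, lat, center, zoom, board_w, board_h, units_per_board=360.0):
    """center: (lon, lat) under the board's middle; zoom doubles per level."""
    scale = (board_w / units_per_board) * (2 ** zoom)
    x = board_w / 2 + (lon - center[0]) * scale
    y = board_h / 2 - (lat - center[1]) * scale   # north is up
    on_board = 0 <= x <= board_w and 0 <= y <= board_h
    return (x, y) if on_board else None           # robot parks off-map otherwise

print(to_board(2.35, 48.85, center=(2.0, 48.5), zoom=3, board_w=50, board_h=50))
```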

Book ChapterDOI
20 Jun 2018
TL;DR: A method for locating and recognizing hand gestures from images, based on Deep Learning, to provide an intuitive and accessible way to interact with Computer Vision-based mobile applications aimed at assisting visually impaired people.
Abstract: In this paper, we present a method for locating and recognizing hand gestures from images, based on Deep Learning. Our goal is to provide an intuitive and accessible way to interact with Computer Vision-based mobile applications aimed at assisting visually impaired people (e.g. pointing a finger at an object in a real scene to zoom in for a close-up of the pointed object). Initially, we have defined different hand gestures that can be assigned to different actions. After that, we have created a database containing images corresponding to these gestures. Lastly, this database has been used to train Neural Networks with different topologies (testing different input sizes, weight initialization, and data augmentation process). In our experiments, we have obtained high accuracies both in localization (96%–100%) and in recognition (99.45%) with Networks that are appropriate to be ported to mobile devices.

Patent
07 Jun 2018
TL;DR: In this paper, a method for displaying preview images is disclosed, which includes receiving first images captured by a first camera having a first field-of-view (FOV) and receiving second images captured from a second camera with a second FOV that is different than the first FOV.
Abstract: A method for displaying preview images is disclosed. In one aspect, the method includes: receiving first images captured by a first camera having a first field-of-view (FOV), receiving second images captured by a second camera having a second FOV that is different than the first FOV, and displaying preview images generated based on the first and second images. The method may further include determining a spatial transform based on depth information associated with individual pixels in the first and second images, and upon receiving instructions to zoom in or out beyond a camera switching threshold, modifying the second image using the spatial transform and displaying the first image and the modified second image consecutively.
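
A minimal sketch of the hand-off rule as described: below the switching threshold the first camera's frame is shown; upon crossing it, the first image and the transform-aligned second image are displayed consecutively. `warp` stands in for the depth-based spatial transform.

```python
# Sketch of the camera switch across the zoom threshold.
def preview_frames(zoom, wide_frame, tele_frame, warp, threshold=2.0):
    if zoom < threshold:
        return [wide_frame]                 # first camera covers low zoom
    return [wide_frame, warp(tele_frame)]   # consecutive display across the switch
```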

Patent
11 May 2018
TL;DR: In this paper, a user interface for operating a dual-aperture digital camera included in a host device is presented, consisting of a screen configured to display at least one icon and an image of a scene acquired with at least one of the two cameras, a frame defining the field of view of a Tele image, the frame superposed on a Wide image having a Wide field of view, and means to switch the screen from displaying the Wide image to displaying the Tele image and vice versa.
Abstract: A user interface for operating a dual-aperture digital camera included in a host device, the dual-aperture digital camera including a Wide camera and a Tele camera, the user interface comprising a screen configured to display at least one icon and an image of a scene acquired with at least one of the Tele and Wide cameras, a frame defining the field of view of a Tele image, the frame superposed on a Wide image having a Wide field of view, and means to switch the screen from displaying the Wide image to displaying the Tele image and vice versa.
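
Drawing the superposed frame reduces to scaling the Wide preview by the ratio of the two fields of view; a sketch assuming aligned optical axes.

```python
# The Tele FOV appears on the Wide preview as a centered rectangle whose side
# ratio follows tan(fov/2), which handles wide angles correctly.
import math

def tele_frame(wide_w, wide_h, fov_wide_deg, fov_tele_deg):
    r = (math.tan(math.radians(fov_tele_deg) / 2)
         / math.tan(math.radians(fov_wide_deg) / 2))
    w, h = wide_w * r, wide_h * r
    return ((wide_w - w) / 2, (wide_h - h) / 2, w, h)   # x, y, width, height

print(tele_frame(4000, 3000, fov_wide_deg=84, fov_tele_deg=35))
```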

Journal ArticleDOI
TL;DR: The experimental results show that the proposed method can not only generate higher-quality images, but also satisfy the requirement of real-time video super-resolution.

Journal ArticleDOI
TL;DR: Paraxial analysis of special types of zoom lenses composed of four members with variable focal length; such systems represent a completely new family of zoom optical systems with applications in measuring systems in photogrammetry, computer vision, triangulation sensors, fringe projection techniques, surveying, machine vision, and so forth.
Abstract: The paper presents paraxial analysis of special types of zoom lenses, which are composed of four members with variable focal length. The position of the optical center of these systems is required to be fixed for a given value of focal length (i.e., the position of the optical center does not depend on object distance for a given value of focal length of the zoom). The formulas that enable the calculation of the optical powers of individual members of such a zoom lens are derived, and the practical application of the derived formulas is demonstrated with an example. Such optical systems represent a completely new family of zoom optical systems with applications in measuring systems in photogrammetry, computer vision, triangulation sensors, fringe projection techniques, surveying, machine vision, and so forth.

Proceedings ArticleDOI
01 Jan 2018
TL;DR: In this article, the authors present an algorithm for assigning nodes to zoom levels that minimizes the change in the number of nodes visible on the screen while the user zooms in and out between the levels.
Abstract: GraphMaps is a system that visualizes a graph using zoom levels, similar to a geographic map visualization. GraphMaps reveals the structural properties of the graph and enables users to explore the graph in a natural way by using the standard zoom and pan operations. The available implementation of GraphMaps faces many challenges: the number of zoom levels may be large, nodes may be unevenly distributed across levels, and shared edges may create ambiguity due to the selection of multiple nodes. In this paper, we develop an algorithmic framework to construct GraphMaps from any given mesh (generated from a 2D point set) and for any given number of zoom levels. We demonstrate our approach by introducing the competition mesh, which is simple to construct and has low dilation and high angular resolution. We present an algorithm for assigning nodes to zoom levels that minimizes the change in the number of nodes visible on the screen while the user zooms in and out between the levels; we think that keeping this change small facilitates smooth browsing of the graph. We also propose new node selection techniques to cope with some of the challenges of the GraphMaps approach.
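
One simple reading of the level-assignment objective is to let the visible-node count grow by a near-constant factor per level. The greedy sketch below assumes a precomputed importance ranking and an assumed growth factor; it is an illustration, not the paper's algorithm.

```python
# Greedy sketch: cut an importance-ranked node list into zoom levels so the
# visible count grows by roughly the same factor from each level to the next.
def assign_levels(nodes_by_rank, levels, growth=2.0):
    total, assignment = len(nodes_by_rank), {}
    visible = max(1, round(total / growth ** (levels - 1)))
    start = 0
    for level in range(levels):
        for node in nodes_by_rank[start:int(round(visible))]:
            assignment[node] = level          # first level the node appears on
        start = int(round(visible))
        visible = min(total, visible * growth)
    return assignment

print(assign_levels(list("abcdefgh"), levels=3))  # 2, then 4, then 8 visible
```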

Patent
26 Jan 2018
TL;DR: A finite conjugate optical assembly as discussed by the authors consists of a lens with a core zoom module that includes five optical groups configured to provide at least a 5.5:1 afocal zoom.
Abstract: A finite conjugate optical assembly, comprising a lens with a core zoom module that includes five optical groups that are configured to provide at least a 5.5:1 afocal zoom. A lens attachment module and rear adapter module may be disposed, respectively, on an object side and an image side of the optical assembly. The optical assembly may exhibit an approximate etendue between 0.95 and 4.65 mm^2·sr, and may be configured for use with image sensors between 6.6 MP and 32 MP.

Journal ArticleDOI
TL;DR: A context-aware design pattern for situated analytics called Blended Model View Controller that allows common user interface controls to work in tandem with printed information on a physical object by adapting the operation and presentation based on a semantic matrix is presented.
Abstract: This paper presents a context-aware design pattern for situated analytics called Blended Model View Controller. Our approach is an event-driven design, allowing a seamless transition between the physical space and information space during use. The Blended Model View Controller allows common user interface controls to work in tandem with printed information on a physical object by adapting the operation and presentation based on a semantic matrix. Also presented is an authoring tool that has been developed to assign the parameters of the semantic matrix. We demonstrate the use of the design pattern with a set of augmented reality interactions including pinch zoom, menus, and details-on-demand. We analyse each control to highlight how the physical and virtual information spaces work in tandem to provide a rich interaction environment in augmented reality.

Patent
Dong Jin Park, Seok Kang, Jee Hong Lee, Joon Hyuk Im, Chae Sung Kim
24 Jan 2018
TL;DR: When a camera switching input includes a zoom factor signal, virtual viewpoint images interpolate the disparity between the pre-transition image and the post-transition image caused by the different cameras being located at different positions, resulting in a smooth visual transition.
Abstract: A digital photographing device may include a plurality of cameras on a common side of the device, an application processor for switching image capture between the cameras, and a display. The application processor may switch the images output on the display when the cameras are switched. During the image transition, one or more virtual viewpoint images are output between a pre-transition image and a post-transition image. The virtual viewpoint images interpolate a disparity between the pre-transition image and the post-transition image caused by the different cameras being located at different positions, and result in a smooth visual transition. When a camera switching input includes a zoom factor signal, the virtual viewpoint images may be compensated according to the input zoom factor and a disparity.
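
A toy sketch of the virtual-viewpoint idea: intermediate frames shift each image by a fraction of the inter-camera disparity and cross-fade, so the viewpoint appears to travel between the two lenses. A constant per-pixel disparity stands in for the real, depth-dependent one.

```python
# Sketch: generate blended intermediate frames between two camera views.
import numpy as np

def virtual_frames(pre, post, disparity_px, steps=4):
    frames = []
    for t in (i / (steps + 1) for i in range(1, steps + 1)):
        shifted_pre = np.roll(pre, int(round(t * disparity_px)), axis=1)
        shifted_post = np.roll(post, -int(round((1 - t) * disparity_px)), axis=1)
        blend = (1 - t) * shifted_pre + t * shifted_post   # cross-fade
        frames.append(blend.astype(pre.dtype))
    return frames
```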

Patent
17 Feb 2018
TL;DR: In this article, the authors describe a system for enabling a consumer of streaming video to obtain different views of the video, such as zoomed views of one or more objects of interest.
Abstract: Systems and methods are described for enabling a consumer of streaming video to obtain different views of the video, such as zoomed views of one or more objects of interest. In an exemplary embodiment, a client device receives an original video stream along with data identifying objects of interest and their spatial locations within the original video. In one embodiment, in response to user selection of an object of interest, the client device switches to display of a cropped and scaled version of the original video to present a zoomed video of the object of interest. The zoomed video tracks the selected object even as the position of the selected object changes with respect to the original video. In some embodiments, the object of interest and the appropriate zoom factor are both selected with a single expanding-pinch gesture on a touch screen.
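
Client-side, the zoomed view is a crop-and-scale around the tracked object's position; below is a dependency-free sketch with nearest-neighbour scaling.

```python
# Sketch: crop a window around the tracked object and scale it to output size.
import numpy as np

def zoomed_view(frame, cx, cy, zoom, out_w, out_h):
    h, w = frame.shape[:2]
    cw, ch = int(w / zoom), int(h / zoom)                 # crop size for this zoom
    x0 = min(max(cx - cw // 2, 0), w - cw)                # clamp crop inside frame
    y0 = min(max(cy - ch // 2, 0), h - ch)
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    ys = np.arange(out_h) * ch // out_h                   # nearest-neighbour rows
    xs = np.arange(out_w) * cw // out_w                   # nearest-neighbour cols
    return crop[ys][:, xs]
```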

Proceedings ArticleDOI
22 Jul 2018
TL;DR: An approach and workflow are proposed for detecting humans in the environment around a crane with monocular images, generating the needed data with a photorealistic network.
Abstract: In this paper, we propose an approach and workflow to detect humans in the environment around a crane using monocular images. The considered area is split into a zone around the crane truck and one around the load. The load is monitored with an optical zoom camera whose zoom we can control. We discretize the zoom levels and train a Convolutional Neural Network for each zoom level. Afterwards, a Meta Convolutional Neural Network is trained to select the next zoom level. Since there are no public datasets available for this kind of task, we propose to generate the needed data with a photorealistic

Journal ArticleDOI
01 Mar 2018 - Optik
TL;DR: The experimental results prove that the proposed system can achieve a multi-plane AR holographic 3D display effect without any image-bearing structure, and can solve the accommodation-vergence conflict problem effectively.

Journal ArticleDOI
TL;DR: In this paper, the authors presented an experimental proof of concept of a programmable optical zoom lens system with no moving parts that can form images with both positive and negative magnifications.
Abstract: In this work we present an experimental proof of concept of a programmable optical zoom lens system with no moving parts that can form images with both positive and negative magnifications. Our system uses two programmable liquid crystal spatial light modulators to form the lenses composing the zoom system. The results included show that images can be formed with both positive and negative magnifications. Experimental results match the theory. We discuss the size limitations of this system caused by the limited spatial resolution and discuss how newer devices would shrink the size of the system.
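
Assuming the two programmable lenses form a 4f relay (an assumption for illustration; the paper's configuration may differ), the magnification is -f2/f1, so reprogramming the focal lengths changes both the size and the sign of the image with no moving parts.

```python
# Worked paraxial example for an assumed 4f relay of two programmable lenses.
def relay_magnification(f1_mm, f2_mm):
    return -f2_mm / f1_mm

print(relay_magnification(100, 50))    # -0.5: inverted, demagnified
print(relay_magnification(-100, 50))   #  0.5: upright (negative f1 flips sign)
```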

Patent
21 Aug 2018
TL;DR: In this paper, a user can quickly and easily navigate to different levels of detail of content by providing a zoom input, which can be used to filter and/or select content for presentation.
Abstract: Approaches provide for navigating or otherwise interacting with content in response to input from a user, including voice inputs, device inputs, gesture inputs, among other such inputs such that a user can quickly and easily navigate to different levels of detail of content This can include, for example, presenting content (eg, images, multimedia, text, etc) in a particular layout, and/or highlighting, emphasizing, animating, or otherwise altering in appearance, and/or arrangement of the interface elements used to present the content based on a current level of detail, where the current level of detail can be determined by data selection criteria associated with a magnification level and other such data As a user interacts with the computing device, for example, by providing a zoom input, values of the selection criteria can be updated, which can be used to filter and/or select content for presentation

Patent
02 Feb 2018
TL;DR: In this paper, a liquid device based holographic zoom system is proposed, which solves the technical problem that the existing holographic display system is complex in structure, difficult in operation and poor in quality of the reproduced image.
Abstract: The invention relates to a liquid device based holographic zoom system, which mainly solves the technical problem that the existing holographic display system is complex in structure, difficult in operation and poor in quality of the reproduced image. The technical scheme adopted by the invention is that the liquid device based holographic zoom system comprises a collimating light source, an SLM (Spatial Light Modulator), a liquid lens, a liquid diaphragm and a receiving screen; the collimating light source is arranged on the incident light path of the SLM, the liquid lens is arranged behind the SLM, the liquid diaphragm is arranged behind the liquid lens, and the receiving screen is arranged behind the liquid diaphragm. According to the liquid device based holographic zoom system, the SLM is coded with a Fresnel lens, and the size of the reproduced image can be changed, without changing the position of the system elements, by changing the focal length of the Fresnel lens and the liquid lens. In order to acquire a high-quality holographic reproduced image on the receiving screen, the liquid diaphragm is adopted to eliminate high-order diffraction images in the holographic zoom system, and thus high-quality holographic zoom display is achieved.

Book ChapterDOI
01 Jan 2018
TL;DR: Inkscape is a Scalable Vector Graphics (SVG) editor; unlike raster images, which are made up of dots and look rough when zoomed, SVG images are defined by scalable shapes.
Abstract: Inkscape is a Scalable Vector Graphics (SVG) editor. Most images you see on a computer are raster images made up of a bunch of dots. In other words, the lines you see on the page are really a series of small dots. If you zoom a lot, you’ll see a rough line consisting of a bunch of big dots.
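
A minimal example of the vector idea: the SVG below stores a line as two endpoints rather than dots, so it stays sharp at any zoom level in Inkscape.

```python
# The line is stored as geometry (two endpoints), not pixels, so zooming in
# Inkscape re-renders it smoothly instead of revealing dots.
svg = """<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <line x1="10" y1="90" x2="90" y2="10" stroke="black" stroke-width="2"/>
</svg>"""
with open("line.svg", "w") as f:
    f.write(svg)
```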