
Showing papers on "Object (computer science)" published in 2008


Journal ArticleDOI
TL;DR: In this article, a large collection of images with ground truth labels is built to be used for object detection and recognition research; such data is useful for supervised learning and quantitative evaluation.
Abstract: We seek to build a large collection of images with ground truth labels to be used for object detection and recognition research. Such data is useful for supervised learning and quantitative evaluation. To achieve this, we developed a web-based tool that allows easy image annotation and instant sharing of such annotations. Using this annotation tool, we have collected a large dataset that spans many object categories, often containing multiple instances over a wide variety of images. We quantify the contents of the dataset and compare against existing state of the art datasets used for object recognition and detection. Also, we show how to extend the dataset to automatically enhance object labels with WordNet, discover object parts, recover a depth ordering of objects in a scene, and increase the number of labels using minimal user supervision and images from the web.
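
A minimal sketch of the WordNet label-enhancement idea mentioned above: free-text annotation labels are mapped onto WordNet synsets so that a query for a superordinate category (e.g. "vehicle") also retrieves its hyponyms. This assumes NLTK with the WordNet corpus installed; it is an illustration of the idea, not the authors' implementation.

```python
# Map a LabelMe-style free-text label onto WordNet and collect its hypernyms,
# so "car" also matches queries for motor_vehicle, vehicle, artifact, ...
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

def enhance_label(label):
    """Return the label plus all WordNet hypernyms of its most common noun sense."""
    synsets = wn.synsets(label, pos=wn.NOUN)
    if not synsets:
        return {label}  # unknown label: keep as-is
    hypernyms = {s.name() for path in synsets[0].hypernym_paths() for s in path}
    return {label} | hypernyms

print(enhance_label("car"))
```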

3,501 citations


Journal ArticleDOI
TL;DR: For certain classes that are particularly prevalent in the dataset, such as people, this work is able to demonstrate a recognition performance comparable to class-specific Viola-Jones style detectors.
Abstract: With the advent of the Internet, billions of images are now freely available online and constitute a dense sampling of the visual world. Using a variety of non-parametric methods, we explore this world with the aid of a large dataset of 79,302,017 images collected from the Internet. Motivated by psychophysical results showing the remarkable tolerance of the human visual system to degradations in image resolution, the images in the dataset are stored as 32 x 32 color images. Each image is loosely labeled with one of the 75,062 non-abstract nouns in English, as listed in the Wordnet lexical database. Hence the image database gives a comprehensive coverage of all object categories and scenes. The semantic information from Wordnet can be used in conjunction with nearest-neighbor methods to perform object classification over a range of semantic levels minimizing the effects of labeling noise. For certain classes that are particularly prevalent in the dataset, such as people, we are able to demonstrate a recognition performance comparable to class-specific Viola-Jones style detectors.
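
A minimal sketch of the non-parametric recognition approach described above: classify a query image by finding its nearest neighbors among 32x32 color images and taking a majority vote over their (noisy) noun labels. The dataset arrays are illustrative stand-ins, not the actual 79-million-image corpus.

```python
import numpy as np
from collections import Counter

def tiny_knn_classify(query, images, labels, k=50):
    """query: (32,32,3) array; images: (N,32,32,3); labels: list of N strings."""
    flat = images.reshape(len(images), -1).astype(np.float32)
    q = query.reshape(-1).astype(np.float32)
    dists = np.linalg.norm(flat - q, axis=1)       # pixel-space distance
    nearest = np.argsort(dists)[:k]
    vote = Counter(labels[i] for i in nearest)     # majority vote absorbs label noise
    return vote.most_common(1)[0][0]
```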

1,871 citations


Journal ArticleDOI
TL;DR: A novel method for detecting and localizing objects of a visual category in cluttered real-world scenes; the approach is applicable to a range of different object categories, including both rigid and articulated objects, and achieves competitive object detection performance from training sets that are between one and two orders of magnitude smaller than those used in comparable systems.
Abstract: This paper presents a novel method for detecting and localizing objects of a visual category in cluttered real-world scenes. Our approach considers object categorization and figure-ground segmentation as two interleaved processes that closely collaborate towards a common goal. As shown in our work, the tight coupling between those two processes allows them to benefit from each other and improve the combined performance. The core part of our approach is a highly flexible learned representation for object shape that can combine the information observed on different training examples in a probabilistic extension of the Generalized Hough Transform. The resulting approach can detect categorical objects in novel images and automatically infer a probabilistic segmentation from the recognition result. This segmentation is then in turn used to again improve recognition by allowing the system to focus its efforts on object pixels and to discard misleading influences from the background. Moreover, the information from where in the image a hypothesis draws its support is employed in an MDL based hypothesis verification stage to resolve ambiguities between overlapping hypotheses and factor out the effects of partial occlusion. An extensive evaluation on several large data sets shows that the proposed system is applicable to a range of different object categories, including both rigid and articulated objects. In addition, its flexible representation allows it to achieve competitive object detection performance already from training sets that are between one and two orders of magnitude smaller than those used in comparable systems.
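
A schematic sketch of the probabilistic Generalized Hough voting at the core of this approach: matched local features cast weighted votes for the object center, and peaks in the vote map become detection hypotheses. The matches and offsets stand in for a learned codebook; the segmentation and MDL verification stages are omitted.

```python
import numpy as np

def hough_votes(matches, image_shape):
    """matches: list of (x, y, dx, dy, weight), where (dx, dy) is the offset
    from the feature to the object center stored with the codebook entry."""
    votes = np.zeros(image_shape, dtype=np.float32)
    for x, y, dx, dy, w in matches:
        cx, cy = int(round(x + dx)), int(round(y + dy))
        if 0 <= cy < image_shape[0] and 0 <= cx < image_shape[1]:
            votes[cy, cx] += w            # probabilistic vote for the center
    return votes

def detect(votes, threshold):
    """Detection hypotheses are vote-map locations above a threshold."""
    ys, xs = np.where(votes > threshold)
    return sorted(zip(votes[ys, xs], ys, xs), reverse=True)
```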

1,084 citations


Journal ArticleDOI
TL;DR: This work considers the problem of grasping novel objects, specifically objects that are being seen for the first time through vision, and presents a learning algorithm that neither requires nor tries to build a 3-d model of the object.
Abstract: We consider the problem of grasping novel objects, specifically objects that are being seen for the first time through vision. Grasping a previously unknown object, one for which a 3-d model is not available, is a challenging problem. Furthermore, even if given a model, one still has to decide where to grasp the object. We present a learning algorithm that neither requires nor tries to build a 3-d model of the object. Given two (or more) images of an object, our algorithm attempts to identify a few points in each image corresponding to good locations at which to grasp the object. This sparse set of points is then triangulated to obtain a 3-d location at which to attempt a grasp. This is in contrast to standard dense stereo, which tries to triangulate every single point in an image (and often fails to return a good 3-d model). Our algorithm for identifying grasp locations from an image is trained by means of supervised learning, using synthetic images for the training set. We demonstrate this approach on two robotic manipulation platforms. Our algorithm successfully grasps a wide variety of objects, such as plates, tape rolls, jugs, cellphones, keys, screwdrivers, staplers, a thick coil of wire, a strangely shaped power horn and others, none of which were seen in the training set. We also apply our method to the task of unloading items from dishwashers.
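
A sketch of the sparse triangulation step: given the predicted grasp point in two calibrated views, recover its 3-d location by standard linear (DLT) triangulation. P1 and P2 are assumed 3x4 camera projection matrices; this is the textbook two-view method, not necessarily the authors' exact implementation.

```python
import numpy as np

def triangulate(p1, p2, P1, P2):
    """p1, p2: (u, v) grasp-point predictions in each image."""
    A = np.stack([
        p1[0] * P1[2] - P1[0],
        p1[1] * P1[2] - P1[1],
        p2[0] * P2[2] - P2[0],
        p2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)           # least-squares solution of A X = 0
    X = Vt[-1]
    return X[:3] / X[3]                   # dehomogenize to a 3-d grasp target
```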

959 citations


Journal ArticleDOI
TL;DR: It is shown that long-term memory is capable of storing a massive number of objects with details from the image; these results have implications for cognitive models and pose a challenge to neural models of memory storage and retrieval, which must be able to account for such a large and detailed storage capacity.
Abstract: One of the major lessons of memory research has been that human memory is fallible, imprecise, and subject to interference. Thus, although observers can remember thousands of images, it is widely assumed that these memories lack detail. Contrary to this assumption, here we show that long-term memory is capable of storing a massive number of objects with details from the image. Participants viewed pictures of 2,500 objects over the course of 5.5 h. Afterward, they were shown pairs of images and indicated which of the two they had seen. The previously viewed item could be paired with either an object from a novel category, an object of the same basic-level category, or the same object in a different state or pose. Performance in each of these conditions was remarkably high (92%, 88%, and 87%, respectively), suggesting that participants successfully maintained detailed representations of thousands of images. These results have implications for cognitive models, in which capacity limitations impose a primary computational constraint (e.g., models of object recognition), and pose a challenge to neural models of memory storage and retrieval, which must be able to account for such a large and detailed storage capacity.

875 citations


Journal ArticleDOI
01 Apr 2008
TL;DR: This paper explores how conceptual modeling could provide applications with direct support of trajectories (i.e. movement data that is structured into countable semantic units) as a first-class concept, and proposes two modeling approaches, one based on a design pattern and the other on dedicated data types.
Abstract: Analysis of trajectory data is the key to a growing number of applications aiming at global understanding and management of complex phenomena that involve moving objects (e.g. worldwide courier distribution, city traffic management, bird migration monitoring). Current DBMS support for such data is limited to the ability to store and query raw movement (i.e. the spatio-temporal position of an object). This paper explores how conceptual modeling could provide applications with direct support of trajectories (i.e. movement data that is structured into countable semantic units) as a first class concept. A specific concern is to allow enriching trajectories with semantic annotations allowing users to attach semantic data to specific parts of the trajectory. Building on a preliminary requirement analysis and an application example, the paper proposes two modeling approaches, one based on a design pattern, the other based on dedicated data types, and illustrates their differences in terms of implementation in an extended-relational context.
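
An illustrative sketch of the "dedicated data type" approach: a trajectory as a first-class object built from countable semantic units (e.g. stops and moves), each carrying user-supplied semantic annotations. The field names are invented for illustration and are not the paper's schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Episode:
    kind: str                              # e.g. "stop" or "move"
    begin: datetime
    end: datetime
    positions: list                        # [(t, x, y), ...] raw spatio-temporal fixes
    annotations: dict = field(default_factory=dict)  # e.g. {"activity": "delivery"}

@dataclass
class Trajectory:
    object_id: str
    episodes: list

    def stops(self):
        """The countable semantic units of interest for many analyses."""
        return [e for e in self.episodes if e.kind == "stop"]
```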

611 citations


Journal ArticleDOI
TL;DR: Research from the non-human primate and rat literature examining the anatomical basis of object recognition memory in the delayed nonmatching-to-sample (DNMS) and spontaneous object recognition (SOR) tasks, respectively, overwhelmingly favors the view that perirhinal cortex (PRh) is a critical region for object recognition memory.

545 citations


Journal ArticleDOI
TL;DR: This paper considers mesh partitioning and skeletonisation on a wide variety of meshes, and bases its algorithms on a volume-based shape-function called the shape-diameter-function (SDF), which remains largely oblivious to pose changes of the same object and maintains similar values in analogue parts of different objects.
Abstract: Mesh partitioning and skeletonisation are fundamental for many computer graphics and animation techniques. Because of the close link between an object’s skeleton and its boundary, these two problems are in many cases complementary. Any partitioning of the object can assist in the creation of a skeleton and any segmentation of the skeleton can infer a partitioning of the object. In this paper, we consider these two problems on a wide variety of meshes, and strive to construct partitioning and skeletons which remain consistent across a family of objects, not a single one. Such families can consist of either a single object in multiple poses and resolutions, or multiple objects which have a general common shape. To achieve consistency, we base our algorithms on a volume-based shape-function called the shape-diameter-function (SDF), which remains largely oblivious to pose changes of the same object and maintains similar values in analogue parts of different objects. The SDF is a scalar function defined on the mesh surface; however, it expresses a measure of the diameter of the object’s volume in the neighborhood of each point on the surface. Using the SDF we are able to process and manipulate families of objects which contain similarities using a simple and consistent algorithm: consistently partitioning and creating skeletons among multiple meshes.
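
A sketch of evaluating the shape-diameter function (SDF) at one surface point: cast several rays inside a cone around the inward normal, intersect them with the mesh, and average the hit distances (the paper additionally weights rays and filters outliers; a simple median filter is used here). `cast_ray(origin, direction)` is an assumed helper returning the distance to the first mesh intersection, or None.

```python
import numpy as np

def sdf_at_point(point, normal, cast_ray, n_rays=30, cone_half_angle=np.pi / 6):
    """normal: unit outward surface normal at `point`."""
    inward = -np.asarray(normal, dtype=float)
    inward /= np.linalg.norm(inward)
    rng = np.random.default_rng(0)
    dists = []
    for _ in range(n_rays):
        # sample a direction inside the cone around the inward normal
        perturb = rng.normal(size=3)
        perturb -= perturb.dot(inward) * inward          # orthogonal component
        perturb /= np.linalg.norm(perturb) + 1e-12
        angle = rng.uniform(0, cone_half_angle)
        d = np.cos(angle) * inward + np.sin(angle) * perturb
        hit = cast_ray(point, d)
        if hit is not None:
            dists.append(hit)
    if not dists:
        return 0.0
    dists = np.asarray(dists)
    med, std = np.median(dists), dists.std()
    kept = dists[np.abs(dists - med) <= std]             # crude outlier filtering
    return float(kept.mean())                            # local volume diameter
```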

519 citations


Journal ArticleDOI
01 Oct 2008
TL;DR: This work investigates the role of sparsity and localized features in a biologically-inspired model of visual object classification and demonstrates the value of retaining some position and scale information above the intermediate feature level.
Abstract: We investigate the role of sparsity and localized features in a biologically-inspired model of visual object classification. As in the model of Serre, Wolf, and Poggio, we first apply Gabor filters at all positions and scales; feature complexity and position/scale invariance are then built up by alternating template matching and max pooling operations. We refine the approach in several biologically plausible ways. Sparsity is increased by constraining the number of feature inputs, lateral inhibition, and feature selection. We also demonstrate the value of retaining some position and scale information above the intermediate feature level. Our final model is competitive with current computer vision algorithms on several standard datasets, including the Caltech 101 object categories and the UIUC car localization task. The results further the case for biologically-motivated approaches to object classification.
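
A minimal sketch of the first two stages described above: Gabor filtering at several orientations (an S1-like layer) followed by local max pooling over position (a C1-like layer). Parameters are illustrative; the full model adds multiple scales, lateral inhibition, sparsified template matching, and feature selection.

```python
import numpy as np
from skimage.filters import gabor
from scipy.ndimage import maximum_filter

def s1_c1(image, n_orientations=4, frequency=0.25, pool_size=8):
    """image: 2-d grayscale array -> list of max-pooled orientation maps."""
    pooled = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations
        real, imag = gabor(image, frequency=frequency, theta=theta)
        energy = np.hypot(real, imag)                        # complex-cell-like response
        pooled.append(maximum_filter(energy, size=pool_size))  # position invariance
    return pooled
```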

404 citations


Proceedings ArticleDOI
23 Jun 2008
TL;DR: The proposed method provides a new higher-level layer to the traditional surveillance pipeline for anomalous event detection and scene model feedback; the scene model is successfully used to detect local as well as global anomalies in object tracks.
Abstract: We present a novel framework for learning patterns of motion and sizes of objects in static camera surveillance. The proposed method provides a new higher-level layer to the traditional surveillance pipeline for anomalous event detection and scene model feedback. Pixel level probability density functions (pdfs) of appearance have been used for background modelling in the past, but modelling pixel level pdfs of object speed and size from the tracks is novel. Each pdf is modelled as a multivariate Gaussian mixture model (GMM) of the motion (destination location & transition time) and the size (width & height) parameters of the objects at that location. Output of the tracking module is used to perform unsupervised EM-based learning of every GMM. We have successfully used the proposed scene model to detect local as well as global anomalies in object tracks. We also show the use of this scene model to improve object detection through pixel-level parameter feedback of the minimum object size and background learning rate. Most object path modelling approaches first cluster the tracks into major paths in the scene, which can be a source of error. We avoid this by building local pdfs that capture a variety of tracks which are passing through them. Qualitative and quantitative analysis of actual surveillance videos proved the effectiveness of the proposed approach.
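
A sketch of the per-location scene model: at each (coarsened) pixel location, fit a Gaussian mixture over track-derived parameters and flag low-likelihood observations as anomalous. The grid, feature layout, and threshold are illustrative choices, not the paper's exact configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

class SceneModel:
    def __init__(self, n_components=3):
        self.n_components = n_components
        self.models = {}                      # (gy, gx) grid cell -> fitted GMM

    def fit(self, samples_by_cell):
        """samples_by_cell: {(gy, gx): (N, 6) array of
        [dest_x, dest_y, transit_time, width, height, speed] per observation}."""
        for cell, X in samples_by_cell.items():
            if len(X) >= self.n_components:
                self.models[cell] = GaussianMixture(self.n_components).fit(X)

    def is_anomalous(self, cell, x, log_lik_threshold=-25.0):
        gmm = self.models.get(cell)
        if gmm is None:
            return True                       # never-seen location: anomalous
        return gmm.score_samples(np.atleast_2d(x))[0] < log_lik_threshold
```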

Journal ArticleDOI
TL;DR: A novel approach for multi-object tracking which considers object detection and spacetime trajectory estimation as a coupled optimization problem is presented, formulated in a minimum description length hypothesis selection framework, which allows the system to recover from mismatches and temporarily lost tracks.
Abstract: We present a novel approach for multi-object tracking which considers object detection and spacetime trajectory estimation as a coupled optimization problem. Our approach is formulated in a minimum description length hypothesis selection framework, which allows our system to recover from mismatches and temporarily lost tracks. Building upon a state-of-the-art object detector, it performs multiview/multicategory object recognition to detect cars and pedestrians in the input images. The 2D object detections are checked for their consistency with (automatically estimated) scene geometry and are converted to 3D observations which are accumulated in a world coordinate frame. A subsequent trajectory estimation module analyzes the resulting 3D observations to find physically plausible spacetime trajectories. Tracking is achieved by performing model selection after every frame. At each time instant, our approach searches for the globally optimal set of spacetime trajectories which provides the best explanation for the current image and for all evidence collected so far while satisfying the constraints that no two objects may occupy the same physical space nor explain the same image pixels at any point in time. Successful trajectory hypotheses are then fed back to guide object detection in future frames. The optimization procedure is kept efficient through incremental computation and conservative hypothesis pruning. We evaluate our approach on several challenging video sequences and demonstrate its performance on both a surveillance-type scenario and a scenario where the input videos are taken from inside a moving vehicle passing through crowded city areas.

Patent
12 Dec 2008
TL;DR: In this article, the authors present a graphical user interface for editing on a portable multifunction device with a touch screen display, where the device detects a multitouch edit initiation gesture on the touch screen and displays a plurality of user-selectable edit option icons.
Abstract: Methods and graphical user interfaces for editing on a portable multifunction device with a touch screen display are disclosed. While displaying an application interface of an application, the device detects a multitouch edit initiation gesture on the touch screen display. In response to detection of the multitouch edit initiation gesture, the device displays a plurality of user-selectable edit option icons in an area of the touch screen display that is independent of a location of the multitouch edit initiation gesture. The device also displays a start point object and an end point object to select content displayed by the application in the application interface.

Patent
23 Apr 2008
TL;DR: The system consists of an apparatus that includes a processor configured to capture an image of one or more objects and analyze the image data to identify the object(s) in the image.
Abstract: Systems, methods, devices and computer program products which relate to utilizing a camera of a mobile terminal as a user interface for search applications and online services to perform visual searching are provided. The system consists of an apparatus that includes a processor that is configured to capture an image of one or more objects and analyze data of the image to identify an object(s) of the image. The processor is further configured to receive information that is associated with at least one object of the images and display the information that is associated with the image. In this regard, the apparatus is able to simplify access to location based services and improve a user's experience. The processor of the apparatus is configured to combine results of robust visual searches with online information resources to enhance location based services.

Book ChapterDOI
12 Oct 2008
TL;DR: This paper shows how to incorporate a star shape prior into graph cut segmentation. This is a generic shape prior that applies to a wide class of objects, in particular to convex objects; in many cases an accurate object segmentation can be achieved with only a single pixel, the center of the object, provided by the user, which is rarely possible with standard interactive graph cut segmentation.
Abstract: In recent years, segmentation with graph cuts is increasingly used for a variety of applications, such as photo/video editing, medical image processing, etc. One of the most common applications of graph cut segmentation is extracting an object of interest from its background. If there is any knowledge about the object shape (i.e. a shape prior), incorporating this knowledge helps to achieve a more robust segmentation. In this paper, we show how to implement a star shape prior into graph cut segmentation. This is a generic shape prior, i.e. it is not specific to any particular object, but rather applies to a wide class of objects, in particular to convex objects. Our major assumption is that the center of the star shape is known, for example, it can be provided by the user. The star shape prior has an additional important benefit - it allows an inclusion of a term in the objective function which encourages a longer object boundary. This helps to alleviate the bias of a graph cut towards shorter segmentation boundaries. In fact, we show that in many cases, with this new term we can achieve an accurate object segmentation with only a single pixel, the center of the object, provided by the user, which is rarely possible with standard graph cut interactive segmentation.
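
A sketch of the star-shape constraint inside a graph cut, here expressed with the PyMaxflow library: for consecutive pixels q (closer to the star center) and p (farther) along each ray from the center, an infinite-capacity edge forbids labeling p as object while q is background. The unary costs and number of rays are placeholders, and the paper's boundary-length "ballooning" term is omitted.

```python
import numpy as np
import maxflow  # PyMaxflow

def segment_with_star_prior(obj_cost, bg_cost, center, n_rays=360):
    h, w = obj_cost.shape
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes((h, w))
    # unary terms: pay obj_cost if labeled object, bg_cost if background
    g.add_grid_tedges(nodes, bg_cost, obj_cost)
    cy, cx = center
    INF = 1e9
    max_r = int(np.hypot(h, w))
    for a in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        prev = None
        for r in range(1, max_r):
            y, x = int(round(cy + r * np.sin(a))), int(round(cx + r * np.cos(a)))
            if not (0 <= y < h and 0 <= x < w):
                break
            cur = nodes[y, x]
            if prev is not None and cur != prev:
                g.add_edge(cur, prev, INF, 0)  # farther=obj & closer=bg forbidden
            prev = cur
    g.maxflow()
    return ~g.get_grid_segments(nodes)         # True where pixel is labeled object
```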

Journal ArticleDOI
TL;DR: It is shown that, under mild assumptions and for large N, the occupancy measure converges, in mean square (and thus in probability) over any finite horizon, to a deterministic dynamical system.
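
A hedged reconstruction of the standard mean-field limit statement this TL;DR points at (notation assumed, not taken from the paper): the empirical occupancy measure of N interacting objects converges, in mean square over any finite horizon, to the solution of a deterministic ODE.

```latex
\[
  M^N(t) \;=\; \frac{1}{N}\sum_{n=1}^{N} \delta_{X_n^N(t)},
  \qquad
  \sup_{0 \le t \le T}\;
  \mathbb{E}\!\left[\bigl\| M^N(t) - \mu(t) \bigr\|^2\right]
  \xrightarrow[N \to \infty]{} 0,
\]
% where the limit evolves as the deterministic dynamical system
% \dot{\mu}(t) = \mu(t)\, Q(\mu(t)), with a rate matrix Q that may
% depend on the current occupancy measure.
```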

Proceedings ArticleDOI
07 Apr 2008
TL;DR: An object's trajectory patterns, which have ad hoc forms for prediction, are discovered and then indexed by a novel access method for efficient query processing; the object's future locations are estimated from this pattern information as well as from existing motion functions applied to the object's recent movements.
Abstract: Existing prediction methods in moving objects databases cannot forecast locations accurately if the query time is far away from the current time. Even for near future prediction, most techniques assume the trajectory of an object's movements can be represented by some mathematical formulas of motion functions based on its recent movements. However, an object's movements are more complicated than what the mathematical formulas can represent. Prediction based on an object's trajectory patterns is a powerful way and has been investigated by several work. But their main interest is how to discover the patterns. In this paper, we present a novel prediction approach, namely The Hybrid Prediction Model, which estimates an object's future locations based on its pattern information as well as existing motion functions using the object's recent movements. Specifically, an object's trajectory patterns which have ad-hoc forms for prediction are discovered and then indexed by a novel access method for efficient query processing. In addition, two query processing techniques that can provide accurate results for both near and distant time predictive queries are presented. Our extensive experiments demonstrate that proposed techniques are more accurate and efficient than existing forecasting schemes.
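
A sketch of the hybrid idea: answer a predictive query from discovered trajectory patterns when one matches the object's recent movement, and fall back to a motion function (here, simple linear extrapolation) otherwise. The pattern table and matching rule are illustrative stand-ins for the paper's pattern mining and index.

```python
import numpy as np

def predict(recent, t_ahead, patterns):
    """recent: [(t, x, y), ...] with at least three fixes;
    patterns: {discretized-prefix: (x, y) predicted destination}."""
    # stand-in pattern match: discretize the last three positions as a key
    key = tuple(np.round(np.asarray(recent)[-3:, 1:]).astype(int).flatten())
    if key in patterns:
        return patterns[key]                 # pattern-based (distant-time) answer
    # motion-function fallback: constant velocity from the last two fixes
    (t0, x0, y0), (t1, x1, y1) = recent[-2], recent[-1]
    vx, vy = (x1 - x0) / (t1 - t0), (y1 - y0) / (t1 - t0)
    return (x1 + vx * t_ahead, y1 + vy * t_ahead)
```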

Patent
07 Jan 2008
TL;DR: An object of interest can be identified by monitoring user activity or inactivity with regard to a displayed map; if the user hovers a pointing device over an object within the displayed map for longer than a predetermined amount of time, it can be inferred that the user should be presented with additional information regarding the object.
Abstract: Provided is a single repository for capturing, connecting, sharing, and visualizing information based on a geographic location, for example. Detailed information about a structure or other object information can be displayed as mode information. An object of interest can be identified by monitoring a user activity or inactivity with regard to a displayed map. If the user hovers a pointing device over an object within the displayed map for longer than a predetermined amount of time, it can be inferred that the user should be presented with additional information regarding the object.

Proceedings ArticleDOI
23 Jun 2008
TL;DR: This work proposes to group visual objects using a multi-layer hierarchy tree that is based on common visual elements by adapting to the visual domain the generative hierarchical latent Dirichlet allocation (hLDA) model previously used for unsupervised discovery of topic hierarchies in text.
Abstract: Objects in the world can be arranged into a hierarchy based on their semantic meaning (e.g. organism - animal - feline - cat). What about defining a hierarchy based on the visual appearance of objects? This paper investigates ways to automatically discover a hierarchical structure for the visual world from a collection of unlabeled images. Previous approaches for unsupervised object and scene discovery focused on partitioning the visual data into a set of non-overlapping classes of equal granularity. In this work, we propose to group visual objects using a multi-layer hierarchy tree that is based on common visual elements. This is achieved by adapting to the visual domain the generative hierarchical latent Dirichlet allocation (hLDA) model previously used for unsupervised discovery of topic hierarchies in text. Images are modeled using quantized local image regions as analogues to words in text. Employing the multiple segmentation framework of Russell et al. [22], we show that meaningful object hierarchies, together with object segmentations, can be automatically learned from unlabeled and unsegmented image collections without supervision. We demonstrate improved object classification and localization performance using hLDA over the previous non-hierarchical method on the MSRC dataset [33].
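
A sketch of the preprocessing this model relies on: quantize local image-region descriptors into a discrete visual vocabulary with k-means, so that each image becomes a bag of "visual words" a (h)LDA topic model can consume. Descriptor extraction is abstracted away; the inputs are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_descriptors, vocab_size=200):
    """all_descriptors: (N, D) array pooled over the training images."""
    return KMeans(n_clusters=vocab_size, n_init=10).fit(all_descriptors)

def image_to_words(kmeans, descriptors):
    """Map one image's (M, D) descriptors to a histogram of visual words."""
    words = kmeans.predict(descriptors)
    return np.bincount(words, minlength=kmeans.n_clusters)
```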

Patent
14 Jan 2008
TL;DR: A real-time computer vision system tracks one or more objects moving in a scene using a target location technique which does not involve searching; the software consists of low-level image grabbing code and a tracking algorithm.
Abstract: A real-time computer vision system tracks one or more objects moving in a scene using a target location technique which does not involve searching. The imaging hardware includes a color camera, frame grabber and processor. The software consists of the low-level image grabbing software and a tracking algorithm. The system tracks objects based on the color, motion and/or shape of the object in the image. A color matching function is used to compute three measures of the target's probable location based on the target color, shape and motion. The method then computes the most probable location of the target using a weighting technique. Once the system is running, a graphical user interface displays the live image from the color camera on the computer screen. The operator can then use the mouse to select a target for tracking. The system will then keep track of the moving target in the scene in real-time.
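
A sketch of the search-free localization step described in the patent: compute per-pixel scores from color matching, motion, and shape, combine them with fixed weights, and take the most probable location. The three score maps are placeholders for the system's actual measures.

```python
import numpy as np

def locate_target(color_score, motion_score, shape_score,
                  w_color=0.5, w_motion=0.3, w_shape=0.2):
    """Each score map is an (H, W) array in [0, 1]; returns the (y, x)
    of the most probable target location under the weighted combination."""
    combined = (w_color * color_score
                + w_motion * motion_score
                + w_shape * shape_score)
    return np.unravel_index(np.argmax(combined), combined.shape)
```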

Journal ArticleDOI
TL;DR: This study used a masking paradigm to measure the efficiency of encoding, and neurophysiological recordings to directly measure visual working memory maintenance, while subjects viewed multifeature objects and were required to remember only a single feature or all of the features of the objects.
Abstract: It has been shown that we have a highly capacity-limited representational space with which to store objects in visual working memory. However, most objects are composed of multiple feature attributes, and it is unknown whether observers can voluntarily store a single attribute of an object without necessarily storing all of its remaining features. In this study, we used a masking paradigm to measure the efficiency of encoding, and neurophysiological recordings to directly measure visual working memory maintenance while subjects viewed multifeature objects and were required to remember only a single feature or all of the features of the objects. We found that measures of both encoding and maintenance varied systematically as a function of which object features were task relevant. These experiments show that individuals can control which features of an object are selectively stored in working memory.

Journal ArticleDOI
TL;DR: A Laplacian media object space is constructed to represent the media objects of each modality, an MMD semantic graph is built to perform cross-media retrieval, and different methods are proposed to utilize relevance feedback.
Abstract: In this paper, we consider the problem of multimedia document (MMD) semantics understanding and content-based cross-media retrieval. An MMD is a set of media objects of different modalities but carrying the same semantics and the content-based cross-media retrieval is a new kind of retrieval method by which the query examples and search results can be of different modalities. Two levels of manifolds are learned to explore the relationships among all the data in the level of MMD and in the level of media object respectively. We first construct a Laplacian media object space for media object representation of each modality and an MMD semantic graph to learn the MMD semantic correlations. The characteristics of media objects propagate along the MMD semantic graph and an MMD semantic space is constructed to perform cross-media retrieval. Different methods are proposed to utilize relevance feedback and experiment shows that the proposed approaches are effective.
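
A sketch of constructing a Laplacian embedding for the media objects of one modality, in the spirit of the "Laplacian media object space": build a k-NN similarity graph, form the normalized graph Laplacian, and embed with the eigenvectors of the smallest nonzero eigenvalues. The affinity, k, and dimensionality are illustrative choices.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh

def laplacian_embedding(features, n_dims=10, k=8):
    """features: (N, D) array of media-object feature vectors."""
    W = kneighbors_graph(features, k, mode="distance")
    W = 0.5 * (W + W.T)                        # symmetrize the k-NN graph
    sigma = W.data.mean()
    W.data = np.exp(-(W.data / sigma) ** 2)    # distances -> affinities
    L = laplacian(W, normed=True)
    # smallest eigenvectors; skip the trivial constant one
    vals, vecs = eigsh(L.astype(float), k=n_dims + 1, which="SM")
    return vecs[:, 1:]
```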

Patent
14 Apr 2008
TL;DR: In this article, a user annotates a shared document with text, sound, images, video, an e-mail message, graphics, screen snapshots, or web site snapshots to share with others.
Abstract: A user annotates a shared document with text, sound, images, video, an e-mail message, graphics, screen snapshots, web site snapshots to share with others. The document and its annotations are stored in a digital object repository to which other users have access. Within the closed collaboration system, only users who are authenticated may upload digital objects, annotate digital objects and view objects and their annotations. The user sends a message to other users to invite them to view the object and its annotations and to add their own annotations. An annotated object generates an alert for all of the invited users. A remote authentication gateway authenticates users and has a repository for user metadata. Digital object repositories are separate from the authentication gateway, thus providing for disintermediation of the user metadata from the digital object data. The collaboration system may be hosted by a third party on a server computer available over the Internet that displays a web site. A user is not required to have collaboration system software on his or her computer and may annotate any image on the web site for later viewing by other users of the web site.

Patent
George Fitzmaurice, Justin Matejka, Igor Mordatch, Gord Kurtenbach, Azam Khan
28 Aug 2008
TL;DR: In this paper, the authors present a navigation system for 3D scenes that includes a model or object with which a user can interact, and a set of mini navigation wheels for experienced users that include all of the functions of the larger wheels in pie-shaped wedges.
Abstract: A navigation system for navigating a three-dimensional (3D) scene that includes a model or object with which a user can interact. The system accommodates and helps both novice and advanced users. To do this, the system provides a set of mini navigation wheels for experienced users that include all of the functions of the larger wheels in pie-shaped wedges and that act as a cursor.

Patent
James Michael Ferris
26 Nov 2008
TL;DR: In this paper, the authors describe a system and methods for embedding a cloud-based resource request in a specification language wrapper, such as an XML object, which can be transmitted to a marketplace to seek the response of available clouds which can support the application or appliance according to the specifications contained in the specification language wrappers.
Abstract: Embodiments relate to systems and methods for embedding a cloud-based resource request in a specification language wrapper. In embodiments, a set of applications and/or a set of appliances can be registered to be instantiated in a cloud-based network. Each application or appliance can have an associated set of specified resources with which the user wishes to instantiate those objects. For example, a user may specify a maximum latency for input/output of the application or appliance, a geographic location of the supporting cloud resources, a processor throughput, or other resource specification to instantiate the desired object. According to embodiments, the set of requested resources can be embedded in a specification language wrapper, such as an XML object. The specification language wrapper can be transmitted to a marketplace to seek the response of available clouds which can support the application or appliance according to the specifications contained in the specification language wrapper.
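
A sketch of assembling such a resource request as an XML specification language wrapper, using Python's standard library. All element and attribute names are invented for illustration; the patent does not prescribe a schema.

```python
import xml.etree.ElementTree as ET

def wrap_request(app_name, max_latency_ms, region, min_cpu_ghz):
    """Build an illustrative XML wrapper around a cloud resource request."""
    req = ET.Element("resource-request", {"object": app_name})
    specs = ET.SubElement(req, "specifications")
    ET.SubElement(specs, "max-io-latency", {"unit": "ms"}).text = str(max_latency_ms)
    ET.SubElement(specs, "geographic-location").text = region
    ET.SubElement(specs, "processor-throughput", {"unit": "GHz"}).text = str(min_cpu_ghz)
    return ET.tostring(req, encoding="unicode")

# e.g. transmitted to a cloud marketplace for matching against available clouds
print(wrap_request("my-appliance", 20, "eu-west", 2.4))
```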

Journal ArticleDOI
TL;DR: This work develops hierarchical, probabilistic models for objects, the parts composing them, and the visual scenes surrounding them, and proposes nonparametric models that use Dirichlet processes to automatically learn the number of parts underlying each object category and the objects composing each scene.
Abstract: We develop hierarchical, probabilistic models for objects, the parts composing them, and the visual scenes surrounding them. Our approach couples topic models originally developed for text analysis with spatial transformations, and thus consistently accounts for geometric constraints. By building integrated scene models, we may discover contextual relationships, and better exploit partially labeled training images. We first consider images of isolated objects, and show that sharing parts among object categories improves detection accuracy when learning from few examples. Turning to multiple object scenes, we propose nonparametric models which use Dirichlet processes to automatically learn the number of parts underlying each object category, and objects composing each scene. The resulting transformed Dirichlet process (TDP) leads to Monte Carlo algorithms which simultaneously segment and recognize objects in street and office scenes.
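
A sketch of the Dirichlet-process mechanism that lets the number of parts be learned rather than fixed: under the Chinese restaurant process prior, each new feature joins an existing part with probability proportional to that part's size, or starts a new part with probability proportional to a concentration parameter alpha. This is the prior alone; the full model couples it with spatial transformations and likelihoods.

```python
import numpy as np

def crp_assignments(n_items, alpha=1.0, rng=None):
    """Sample part assignments for n_items features from a CRP(alpha) prior."""
    rng = rng or np.random.default_rng(0)
    counts = []                                # items per part so far
    labels = []
    for _ in range(n_items):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)                   # a brand-new part is created
        else:
            counts[k] += 1
        labels.append(k)
    return labels                              # number of parts = len(set(labels))
```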

Patent
Sun Jian
10 Sep 2008
TL;DR: A method for associating objects in an electronic device (100) is proposed, in which one of the first and second areas of the touch sensitive user interface (170) is a touch sensitive display screen (105) and the other is a touch sensitive keypad (165).
Abstract: A method (400) of associating objects in an electronic device (100). The method (400) identifies a first object (420) in response to detecting an initial contact of a scribed stroke (410) at a location of a first area of a touch sensitive user interface (170) which corresponds with the first object. Next, a second object (455) is identified in response to detecting a final contact (450) of the scribed stroke at a location of a second area of the touch sensitive user interface (170) which corresponds with the second object. The method (400) then associates the first object with the second object (460), wherein one of the first and second areas of the touch sensitive user interface (170) is a touch sensitive display screen (105) and the other area of the touch sensitive user interface (170) is a touch sensitive keypad (165).

Patent
18 Nov 2008
TL;DR: A method is described for modifying a predetermined object pattern to correct for geometric distortion of a pattern generator and generating the modified pattern using the pattern generator; the generated pattern interacts with a reactive material to form the portion of the three-dimensional object defined by the predetermined object pattern.
Abstract: A method includes receiving a predetermined object pattern representing a portion of a three-dimensional object, modifying the predetermined object pattern to correct for geometric distortion of a pattern generator, and generating the modified pattern using the pattern generator. The generated pattern interacts with a reactive material to form the portion of the three-dimensional object defined by the predetermined object pattern.


Journal ArticleDOI
TL;DR: Object words oriented attention toward, and activated perceptual simulations in, the objects' typical locations; these results shed new light on how language affects perception.
Abstract: Many objects typically occur in particular locations, and object words encode these spatial associations. We tested whether such object words (e.g., head, foot) orient attention toward the location where the denoted object typically occurs (i.e., up, down). Because object words elicit perceptual simulations of the denoted objects (i.e., the representations acquired during actual perception are reactivated), we predicted that an object word would interfere with identification of an unrelated visual target subsequently presented in the object's typical location. Consistent with this prediction, three experiments demonstrated that words denoting objects that typically occur high in the visual field hindered identification of targets appearing at the top of the display, whereas words denoting low objects hindered target identification at the bottom of the display. Thus, object words oriented attention to and activated perceptual simulations in the objects' typical locations. These results shed new light on how language affects perception.