
Showing papers in "IEEE Computer Graphics and Applications in 2005"


Journal ArticleDOI
TL;DR: The NavShoe device provides not only robust approximate position, but also an extremely accurate orientation tracker on the foot, which can greatly reduce the database search space for computer vision, making it much simpler and more robust.
Abstract: A navigation system that tracks the location of a person on foot is useful for finding and rescuing firefighters or other emergency first responders, or for location-aware computing, personal navigation assistance, mobile 3D audio, and mixed or augmented reality applications. One of the main obstacles to the real-world deployment of location-sensitive wearable computing, including mixed reality (MR), is that current position-tracking technologies require an instrumented, marked, or premapped environment. At InterSense, we've developed a system called NavShoe, which uses a new approach to position tracking based on inertial sensing. Our wireless inertial sensor is small enough to easily tuck into the shoelaces, and sufficiently low power to run all day on a small battery. Although it can't be used alone for precise registration of close-range objects, in outdoor applications augmenting distant objects, a user would barely notice the NavShoe's meter-level error combined with any error in the head's assumed location relative to the foot. NavShoe can greatly reduce the database search space for computer vision, making it much simpler and more robust. The NavShoe device provides not only robust approximate position, but also an extremely accurate orientation tracker on the foot.
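The abstract doesn't spell out the tracking algorithm, but foot-mounted inertial navigation of this kind is commonly stabilized with zero-velocity updates (ZUPTs): whenever the foot rests flat on the ground, any accumulated velocity must be sensor drift and can be reset. A minimal 1D sketch of that idea (function name, threshold, and bias values are illustrative, not from the article):

```python
import numpy as np

def track_position(accel, dt=0.01, stationary_thresh=0.05):
    """Integrate foot acceleration into position, applying a zero-velocity
    update (ZUPT) whenever the foot is detected as stationary.

    accel: 1D array of world-frame acceleration samples (gravity removed).
    """
    vel = 0.0
    pos = 0.0
    positions = []
    for a in accel:
        vel += a * dt
        # ZUPT: while the foot is on the ground its true velocity is zero,
        # so any accumulated velocity is pure sensor drift -- reset it.
        if abs(a) < stationary_thresh:
            vel = 0.0
        pos += vel * dt
        positions.append(pos)
    return np.array(positions)

# A constant accelerometer bias: without ZUPTs position drifts quadratically;
# with ZUPTs the drift is bounded.
accel = np.full(1000, 0.01)
drift_free = track_position(accel)                          # ZUPT active
drifting = track_position(accel, stationary_thresh=0.0)     # ZUPT disabled
```

A real NavShoe-class system would run a full 3D strapdown filter with stance-phase detection from gyro and accelerometer magnitudes; the sketch only shows why resetting velocity during stance bounds the otherwise quadratic position drift.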

1,432 citations


Journal ArticleDOI
Chaomei Chen1
TL;DR: The top 10 unsolved problems list described in this article is a revised and extended list of information visualization problems, highlighting both issues from a user-centered perspective and challenges that are technical in nature.
Abstract: The top 10 unsolved problems list described in this article is a revised and extended version of information visualization problems. These problems are not necessarily imposed by technical barriers; rather, they are problems that might hinder the growth of information visualization as a field. The first three problems highlight issues from a user-centered perspective. The fifth, sixth, and seventh problems are technical in nature. The last three are the ones that need tackling at the disciplinary level. The author broadly defines information visualization as visual representations of the semantics, or meaning, of information. In contrast to scientific visualization, information visualization typically deals with nonnumeric, nonspatial, and high-dimensional data.

358 citations


Journal ArticleDOI
TL;DR: This article describes how a multi-disciplinary research team transformed core MR technology and methods into diverse urban terrain applications that are used for military training and situational awareness, as well as for community learning to significantly increase the entertainment, educational, and satisfaction levels of existing experiences in public venues.
Abstract: Transferring research from the laboratory to mainstream applications requires the convergence of people, knowledge, and conventions from divergent disciplines. Solutions involve more than combining functional requirements and creative novelty. To transform technical capabilities of emerging mixed reality (MR) technology into the mainstream involves the integration and evolution of unproven systems. For example, real-world applications require complex scenarios (a content issue) involving an efficient iterative pipeline (a production issue) and driving the design of a story engine (a technical issue) that provides an adaptive experience with an after-action review process (a business issue). This article describes how a multi-disciplinary research team transformed core MR technology and methods into diverse urban terrain applications. These applications are used for military training and situational awareness, as well as for community learning to significantly increase the entertainment, educational, and satisfaction levels of existing experiences in public venues.

269 citations


Journal ArticleDOI
TL;DR: The authors select 10 AR projects from their more than five years of research, development, and deployment of AR systems in the automotive, aviation, and astronautics industries to examine the main challenges faced and to share some of the lessons learned.
Abstract: The 2003 International Symposium on Mixed and Augmented Reality was accompanied by a workshop on potential industrial applications. The organizers wisely called it potential because the real use of augmented reality (AR) in an industrial context is still in its infancy. Our own experience in this field clearly supports this viewpoint. We have been actively involved in the research, development, and deployment of AR systems in the automotive, aviation, and astronautics industries for more than five years and have developed and implemented AR systems in a wide variety of environments while working at DaimlerChrysler in Germany. In this article we have selected 10 AR projects from those we have managed and implemented in the past to examine the main challenges we faced and to share some of the lessons we learned.

248 citations


Journal Article
TL;DR: The PC's increasing graphical-processing power is fueling a demand for larger and more capable display devices. This fact, coupled with graphics-card advancements, has led to an increase in multiple-monitor (multimon) use.
Abstract: As large displays become more affordable, researchers are investigating the effects on productivity, and techniques for making the large display user experience more effective. Recent work has demonstrated significant productivity benefits, but has also identified numerous usability issues that inhibit productivity. Studies show that larger displays enable users to create and manage many more windows, as well as to engage in more complex multitasking behavior. In this paper, we describe various usability issues, including losing track of the cursor, accessing windows and icons at a distance, dealing with bezels in multimonitor displays, window management, and task management. We also present several novel techniques that address these issues and make users more productive on large display systems.

218 citations


Journal ArticleDOI
TL;DR: This article focuses on expert reviews that were used for the applications and encourages more experimentation with this technique, particularly to develop a good set of visualization heuristics and to compare it with other methods.
Abstract: Visualization research generates beautiful images and impressive interactive systems. Emphasis on evaluating visualizations is growing. Researchers have successfully used alternative evaluation techniques in human-computer interaction (HCI), including focus groups, field studies, and expert reviews. These methods tend to produce qualitative results and require fewer participants than controlled experiments. In this article, we focus on the expert reviews that we used for our applications. We commonly use expert reviews to assess interface usability. Expert reviews can generate valuable feedback on visualization tools. We recommend i) including experts with experience in data display as well as usability, and ii) developing heuristics based on visualization guidelines as well as usability guidelines. Expert reviews should not be used exclusively, since experts might not fully predict end-user actions. Furthermore, we encourage more experimentation with this technique, particularly to develop a good set of visualization heuristics and to compare it with other methods.

213 citations


Journal ArticleDOI
TL;DR: The PC's increasing graphical-processing power is fueling a demand for larger and more capable display devices. Several operating systems have supported work with multiple displays for some time. This fact, coupled with graphics-card advancements, has led to an increase in multiple-monitor (multimon) use. Large displays offer users significant benefits and usability challenges, as discussed in this paper.
Abstract: The PC's increasing graphical-processing power is fueling a demand for larger and more capable display devices. Several operating systems have supported work with multiple displays for some time. This fact, coupled with graphics-card advancements, has led to an increase in multiple-monitor (multimon) use. Large displays offer users significant benefits and usability challenges. In this article the authors discuss those challenges along with novel techniques to address them.

183 citations


Journal ArticleDOI
TL;DR: The same strategy that is used in converting color images to gray scale provides a method for recoloring the images to deliver increased information content to such observers.
Abstract: In spite of the ever-increasing prevalence of low-cost, color printing devices, gray-scale printers remain in widespread use. Authors producing documents with color images for any venue must account for the possibility that the color images might be reduced to gray scale before they are viewed. Because conversion to gray scale reduces the number of color dimensions, some loss of visual information is generally unavoidable. Ideally, we can restrict this loss to features that vary minimally within the color image. Nevertheless, with standard procedures in widespread use, this objective is not often achieved, and important image detail is often lost. Consequently, algorithms that convert color images to gray scale in a way that preserves information remain important. Human observers with color-deficient vision may experience the same problem, in that they may perceive distinct colors to be indistinguishable and thus lose image detail. The same strategy that is used in converting color images to gray scale provides a method for recoloring the images to deliver increased information content to such observers.
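The information loss the article targets is easy to see in the standard luminance-weighted conversion: any two colors with equal luminance collapse to the same gray level. A minimal sketch using the common ITU-R BT.601 weights (the article's actual conversion and recoloring strategies are not reproduced here):

```python
def to_gray(rgb):
    """Standard luminance-weighted grayscale conversion (BT.601 weights)."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

# Pure red and a gray patch chosen to have the same luminance:
red = (1.0, 0.0, 0.0)
gray_patch = (0.299, 0.299, 0.299)

# Both map to the same gray level, so any image detail carried solely by
# this chromatic difference disappears in the grayscale reproduction --
# exactly the loss that information-preserving conversions try to avoid.
assert abs(to_gray(red) - to_gray(gray_patch)) < 1e-9
```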

180 citations


Journal ArticleDOI
TL;DR: A sensor dubbed GelForce acts as a practical tool in both conventional and novel desktop applications using common consumer hardware and can represent the magnitude and direction of force applied to the skin's surface using computer vision.
Abstract: The desire to reproduce and expand the human senses drives innovations in sensor technology. Conversely, human-interface research aims to allow people to interact with machines as if they were natural objects in a cybernetic, human-oriented way. We wish to unite the two paradigms with a haptic sensor as versatile as the sense of touch and developed for a dual purpose: to improve the robotic capability to interact with the physical world, and to improve the human capability to interact with the virtual world for emerging applications with a heightened sense of presence. We designed a sensor, dubbed GelForce, that acts as a practical tool in both conventional and novel desktop applications using common consumer hardware. By measuring a surface traction field, the GelForce tactile sensor can represent the magnitude and direction of force applied to the skin's surface using computer vision. This article is available with a short video documentary on CD-ROM.

169 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used AR to treat phobias such as fear of flying, agoraphobia, claustrophobia, and phobia to insects and small animals.
Abstract: Virtual reality (VR) is useful for treating several psychological problems, including phobias such as fear of flying, agoraphobia, claustrophobia, and phobia to insects and small animals. We believe that augmented reality (AR) could also be used to treat some psychological disorders. AR and VR share some advantages over traditional treatments. However, AR gives a greater feeling of presence (the sensation of being there) and reality judgment (judging an experience as real) than VR because the environment and the elements the patient uses to interact with the application are real. Moreover, in AR users see their own hands, feet, and so on, whereas VR only simulates this experience. With these differences in mind, the question arises as to the kinds of psychological treatments AR and VR are most suited for. In our system, patients see their own hands, feet, and so on. They can touch the table the animals are crossing or see their feet while the animals are running on the floor. They can also hold a marker with a dead spider or cockroach or pick up a flyswatter, a can of insecticide, or a dustpan.

139 citations


Journal ArticleDOI
TL;DR: A wavelet-based HDR still-image encoding method that maps the logarithm of each pixel value into integer values and then sends the results to a JPEG 2000 encoder to meet the HDR encoding requirement.
Abstract: The raw size of a high-dynamic-range (HDR) image brings about problems in storage and transmission. Many bytes are wasted in data redundancy and perceptually unimportant information. To address this problem, researchers have proposed some preliminary algorithms to compress the data, like RGBE/XYZE, OpenEXR, LogLuv, and so on. HDR images can have a dynamic range of more than four orders of magnitude while conventional 8-bit images retain only two orders of magnitude of the dynamic range. This distinction between an HDR image and a conventional image leads to difficulties in using most existing image compressors. JPEG 2000 supports up to 16-bit integer data, so it can already provide image compression for most HDR images. In this article, we propose a JPEG 2000-based lossy image compression scheme for HDR images of all dynamic ranges. We show how to fit HDR encoding into a JPEG 2000 encoder to meet the HDR encoding requirement. To achieve the goal of minimum error in the logarithm domain, we map the logarithm of each pixel value into integer values and then send the results to a JPEG 2000 encoder. Our approach is basically a wavelet-based HDR still-image encoding method.
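The log-then-quantize step the authors describe can be sketched as follows (per-image normalization and the 16-bit target are illustrative details; in a real codec the quantized plane would then go to a JPEG 2000 encoder, with the normalization range stored as side information for the decoder):

```python
import numpy as np

def hdr_to_uint16(pixels, eps=1e-6):
    """Map HDR pixel values to 16-bit integers in the log domain, so that
    quantization error is roughly uniform in log-luminance, matching the
    goal of minimum error in the logarithm domain."""
    logp = np.log(np.maximum(pixels, eps))
    lo, hi = logp.min(), logp.max()
    q = np.round((logp - lo) / (hi - lo) * 65535).astype(np.uint16)
    return q, (lo, hi)

def uint16_to_hdr(q, log_range):
    """Inverse map: restore HDR values from the quantized log-domain plane."""
    lo, hi = log_range
    return np.exp(q.astype(np.float64) / 65535 * (hi - lo) + lo)

# Four orders of magnitude of dynamic range survive the round trip with
# small relative error -- far beyond what an 8-bit linear encoding retains.
hdr = np.array([1e-2, 1.0, 50.0, 1e2])
q, log_range = hdr_to_uint16(hdr)
restored = uint16_to_hdr(q, log_range)
assert np.allclose(restored, hdr, rtol=1e-3)
```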

Journal ArticleDOI
TL;DR: An untethered VR system with force feedback based on the use of air pressure gives the user the feeling of touching an object as well as a projection-based stereo display and optical position tracking.
Abstract: An untethered VR system with force feedback based on the use of air pressure gives the user the feeling of touching an object. The result is an untethered human interface that's easy to use in everyday life. The system uses a projection-based stereo display and optical position tracking. The user employs red and blue 3D glasses and a lightweight paddle. This article is available with a short video documentary on CD-ROM.

Journal ArticleDOI
TL;DR: The CirculaFloor locomotion interface's movable tiles employ a holonomic mechanism to achieve omnidirectional motion; users can thus maintain their position while walking in a virtual environment using a set of movable tiles.
Abstract: The CirculaFloor locomotion interface's movable tiles employ a holonomic mechanism to achieve omnidirectional motion. Users can thus maintain their position while walking in a virtual environment. The CirculaFloor method exploits both the treadmill and footpad, creating an infinite omnidirectional surface using a set of movable tiles. The tiles provide a sufficient area for walking, and thus precision tracing of the foot position is not required. This method has the potential to create an uneven surface by mounting an up-and-down mechanism on each tile. This article is available with a short video documentary on CD-ROM.

Journal ArticleDOI
TL;DR: A dual approach to helping students improve their spatial abilities is proposed, based on computer graphics applications and a sketch-based modeling system, that can capture students' attention and foster two important engineering skills: freehand sketching and an understanding of the relationship between orthographic and axonometric views.
Abstract: We discuss the importance of visualization skills in engineering education and propose a dual approach to helping students improve their spatial abilities. The approach is based on computer graphics applications and uses both Web-based graphics applications and a sketch-based modeling system. By combining these approaches, we can capture students' attention and foster two important engineering skills: freehand sketching and an understanding of the relationship between orthographic and axonometric views.

Journal ArticleDOI
TL;DR: Some of the current applications of computer graphics to dance including visualizing choreography, composing, editing and animating dance notation, and enhancing live performance are described.
Abstract: As computer technology has developed and become less expensive, many artists have found ways to use it to enhance their performances with interactive multimedia. This has included incorporating computer-generated images and sound with live dance performance and using sensors that let the live dancers' movements control imagery, sound, and a wide variety of special effects. This overview describes some of the current applications of computer graphics to dance including visualizing choreography, composing, editing and animating dance notation, and enhancing live performance.

Journal ArticleDOI
TL;DR: Interactive projection as discussed by the authors is a technique for moving a cursor across a projection, allowing all the familiar mouse interactions from the desktop to be transposed to a handheld projector, and requiring only natural, one-handed pointing motion by the user.
Abstract: Handheld projection lets users create opportunistic displays on any suitable nearby surface. This projection method is a practical and useful addition to mobile computing. Incorporating a projector into a handheld device raises the issue of how to interact with a projection. The focus of the article is interactive projection: a technique for moving a cursor across a projection, allowing all the familiar mouse interactions from the desktop to be transposed to a handheld projector, and requiring only natural, one-handed pointing motion by the user. This article is available with a short video documentary on CD-ROM.

Journal ArticleDOI
TL;DR: The Java-hosted algorithm visualization environment (JHAVE) is not an AV system itself but rather a support environment for a variety of AV systems (called AV engines by JHAVE).
Abstract: JHAVE fosters the use of algorithm visualization as an effective pedagogical tool for computer science educators, helping students to better understand algorithms. The Java-hosted algorithm visualization environment (JHAVE) is not an AV system itself but rather a support environment for a variety of AV systems (called AV engines by JHAVE). In broad terms, JHAVE gives such an engine a drawing context on which it can render its pictures in any way. In return, JHAVE provides the engine with effortless ways to synchronize its graphical displays with i) a standard set of VCR-like controls, ii) information and pseudocode windows, iii) input generators, iv) stop-and-think questions, and v) meaningful content generation tools.

Journal ArticleDOI
TL;DR: This survey looks at visualization techniques used in science and engineering education to enhance student learning and encourage underrepresented students to pursue technical degrees.
Abstract: This survey looks at visualization techniques used in science and engineering education to enhance student learning and encourage underrepresented students to pursue technical degrees. This article aims to encourage faculty in science, technology, engineering, and math (STEM) disciplines to use visual methods to communicate to their students. Visual learning is an important method for exploiting students' visual senses to enhance learning and engage their interest. This methodology also has the potential to increase the number of students in STEM fields, especially of women and minority students. A visual approach to science and engineering enhances communication. This visualization revolution shows that letting scientists engage the higher cognitive parts of the brain by thinking and communicating visually improves how they perform their research.

Journal ArticleDOI
TL;DR: This work introduces multivalue data as a new data type in the context of scientific visualization; while this data type has existed in other fields, the visualization community has largely ignored it.
Abstract: We introduce multivalue data as a new data type in the context of scientific visualization. While this data type has existed in other fields, the visualization community has largely ignored it. Formally, a multivalue datum is a collection of values about a single variable. Multivalue data sets can be defined for multiple dimensions. A spatial multivalue data set consists of a multivalue datum at each physical location in the domain. The time dimension is equally valid. This leads to spatio-temporal multivalue data sets where there is time varying, multidimensional data with a multivalue datum at each location and time. The spatial multivalue data type captures multiple instances of the same variable at each location in space. Visualizing spatial multivalue data sets is a new challenge.
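The structure the abstract defines is simple to make concrete: a collection of values of one variable at each spatial location. A minimal sketch (grid size, sample count, and the distribution are illustrative, not from the article):

```python
import numpy as np

# A spatial multivalue data set: at each grid location we store a
# *collection* of values of a single variable (e.g. repeated simulation
# runs), rather than one scalar.
rng = np.random.default_rng(0)
grid = (4, 4)          # spatial domain
samples_per_cell = 32  # size of the multivalue datum at each location

data = rng.normal(loc=20.0, scale=2.0, size=grid + (samples_per_cell,))

# A common first step before display is to reduce each multivalue datum to
# summary statistics -- which discards exactly the per-location distribution
# information that multivalue visualization techniques aim to preserve.
mean_field = data.mean(axis=-1)   # one mean per location, shape (4, 4)
std_field = data.std(axis=-1)     # one spread per location, shape (4, 4)
```

A spatio-temporal variant would simply add a time axis, giving a multivalue datum at each location and time step.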

Journal ArticleDOI
TL;DR: The storage bin mobile storage mechanism is developed, which combines the space-preserving features of existing peripheral storage mechanisms with the capability to relocate stored items in the workspace.
Abstract: The ability to store resource items anywhere in the workspace and move them around can be critical for coordinating task and group interactions on a table. However, existing casual storage techniques for digital workspaces only provide access to stored items at the periphery of the workspace, potentially compromising collaborative interactions at a digital tabletop display. To facilitate this storage behavior in a digital tabletop workspace, we developed the storage bin mobile storage mechanism, which combines the space-preserving features of existing peripheral storage mechanisms with the capability to relocate stored items in the workspace. A user study explores the utility of storage bins on tabletop display collaboration.

Journal ArticleDOI
TL;DR: The goal of this paper is to synthesize realistic eye-gaze and blink motion, accounting for any possible correlations between the two, using a data-driven texture synthesis approach.
Abstract: Modeling human eyes requires special care. The goal of this paper is to synthesize realistic eye-gaze and blink motion, accounting for any possible correlations between the two. The technique adopts a data-driven texture synthesis approach to the problem of synthesizing realistic eye motion. The basic assumption is that eye gaze probably has some connection with eyelid motion, as well as with head motion and speech. But the connection is not strictly deterministic and would be difficult to characterize explicitly. A major advantage of the data-driven approach is that the investigator does not need to determine whether these apparent correlations actually exist. If the correlations occur in the data, the synthesis (properly applied) will reproduce them.

Journal ArticleDOI
TL;DR: To create a scalable and easy-to-use large-format display system for collaborative visualization, the authors have developed various techniques, software tools, and applications.
Abstract: Increased processor and storage capacities have supported the computational sciences, but have simultaneously unleashed a data avalanche on the scientific community. As a result, scientific research is limited by data analysis and visualization capabilities. These new bottlenecks have been the driving motivation behind the Princeton scalable display wall project. To create a scalable and easy-to-use large-format display system for collaborative visualization, the authors have developed various techniques, software tools, and applications.

Journal ArticleDOI
TL;DR: The main goal of the volume illustration approach is to enhance the expressiveness of volume rendering by highlighting important features within a volume while subjugating insignificant details, and rendering the result in a way that resembles an illustration.
Abstract: The enormous amount of 3D data generated by modern scientific experiments, simulations, and scanners exacerbates the tasks of effectively exploring, analyzing, and communicating the essential information from these data sets. The expanding field of biomedicine creates data sets that challenge current techniques to effectively communicate information for use in diagnosis, staging, simulation, and training. In contrast, medical illustration succinctly represents essential anatomical structures in a clear way and is used extensively in the medical field for communicative and illustrative purposes. Thus, the idea of rendering real medical data sets using traditional medical illustrative styles inspired work in volume illustration. The main goal of the volume illustration approach is to enhance the expressiveness of volume rendering by highlighting important features within a volume while subjugating insignificant details, and rendering the result in a way that resembles an illustration. Recent approaches have been extended to interactive volume illustration by using PC graphics hardware volume rendering to accelerate the enhanced rendering, resulting in nearly interactive rates.

Journal ArticleDOI
TL;DR: Time Follower's Vision is a mixed-reality-based visual presentation system that captures a robotic vehicle's size, position, and environment, allowing even inexperienced operators to easily control it.
Abstract: Time Follower's Vision is a mixed-reality-based visual presentation system that captures a robotic vehicle's size, position, and environment, allowing even inexperienced operators to easily control it. The technique produces a virtual image using mixed reality technology and presents the vehicle's surrounding environment and status to the operator. Therefore, even inexperienced operators can readily understand the vehicle's position and orientation and the surrounding situation. The authors implement a prototype system and evaluate its feasibility. This article is available with a short video documentary on CD-ROM.

Journal ArticleDOI
TL;DR: The argument in this article is that it's time to begin synthesizing the fragments and views into a level 3 model, an ontology of visualization; the article addresses why this should happen, what is already in place, how such an ontology might be constructed, and why now.
Abstract: Visualizers, like logicians, have long been concerned with meaning. Generalizing from MacEachren's overview of cartography, visualizers have to think about how people extract meaning from pictures (psychophysics), what people understand from a picture (cognition), how pictures are imbued with meaning (semiotics), and how in some cases that meaning arises within a social and/or cultural context. If we think of the communication acts carried out in the visualization process, further levels of meaning are suggested. Visualization begins when someone has data that they wish to explore and interpret; the data are encoded as input to a visualization system, which may in its turn interact with other systems to produce a representation. This is communicated back to the user(s), who have to assess this against their goals and knowledge, possibly leading to further cycles of activity. Each phase of this process involves communication between two parties. For this to succeed, those parties must share a common language with an agreed meaning. We offer the following three steps, in increasing order of formality: terminology (jargon), taxonomy (vocabulary), and ontology. Our argument in this article is that it's time to begin synthesizing the fragments and views into a level 3 model, an ontology of visualization. We also address why this should happen, what is already in place, how such an ontology might be constructed, and why now.

Journal ArticleDOI
TL;DR: The authors extend the standard bilateral filtering method and build a new local adaptive noise reduction kernel that suppresses the outliers and interpixel incoherence in a noniterative way.
Abstract: Monte Carlo noise appears as outliers and as interpixel incoherence in a typical image rendered at low sampling density. Unfortunately, none of the previous approaches can reduce both types of noise in a unified way. In this article, we propose such a unified Monte Carlo noise reduction approach using bilateral filtering. We extended the standard bilateral filtering method and built a new local adaptive noise reduction kernel. The new operator suppresses the outliers and interpixel incoherence in a noniterative way.
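The article's extended kernel isn't given in the abstract, but the standard bilateral filter it builds on, and that filter's key limitation for Monte Carlo noise, can be sketched in 1D (all parameters are illustrative):

```python
import numpy as np

def bilateral_filter_1d(signal, radius=3, sigma_s=2.0, sigma_r=0.5):
    """Standard bilateral filter: each output sample is a weighted average
    of its neighbors, weighted jointly by spatial closeness (sigma_s) and
    value similarity (sigma_r), in a single noniterative pass."""
    out = np.empty(len(signal), dtype=np.float64)
    n = len(signal)
    for i in range(n):
        j0, j1 = max(0, i - radius), min(n, i + radius + 1)
        idx = np.arange(j0, j1)
        w = (np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2))
             * np.exp(-((signal[idx] - signal[i]) ** 2) / (2 * sigma_r ** 2)))
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out

rng = np.random.default_rng(1)
sig = rng.normal(0.0, 0.1, 50)   # interpixel incoherence (ground truth is 0)
sig[25] = 5.0                    # a single bright Monte Carlo outlier
filt = bilateral_filter_1d(sig)

mask = np.ones(50, dtype=bool)
mask[23:28] = False
# The incoherence is smoothed toward the true signal ...
assert np.mean(filt[mask] ** 2) < np.mean(sig[mask] ** 2)
# ... but the outlier survives almost untouched: value similarity makes the
# spike its own best neighbor, as if it were an edge. This is exactly why
# the authors extend the standard kernel to handle both noise types at once.
assert filt[25] > 4.0
```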

Journal ArticleDOI
TL;DR: Bounded-blending operations that are defined using R-functions and displacement functions with the localized area of influence could replace pure set-theoretic operations in the construction of a solid without rebuilding the entire construction tree data structure.
Abstract: New analytical formulations of bounded blending operations can enhance function-based constructive shape modeling. In this article, we introduce bounded-blending operations that we define using R-functions and displacement functions with the localized area of influence. We define the shape and location of the blend by control points on the surfaces of two solids or by an additional bounding solid. We can apply our proposed blending using a bounding solid to a single selected edge or vertex. We also introduce new multiple blends and a partial edge blend. Our description supports set-theoretic operations on blends and blends on blends - that is, recursive blends. In this sense, our proposed operations could replace pure set-theoretic operations in the construction of a solid without rebuilding the entire construction tree data structure. Our proposed blending method can have application in interactive design.
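The flavor of such function-based blending can be illustrated with the well-known blending union from the R-functions literature (the specific form below is a standard example under the convention f ≥ 0 inside a solid, not necessarily the article's exact formulation): the R-function union of two solids defined by f₁ and f₂ gains a displacement term whose magnitude and extent are controlled by parameters a₀, a₁, a₂:

```latex
% Blending union: R-function union plus a localized displacement term.
% a_0 sets the displacement magnitude; a_1, a_2 control how quickly the
% blend fades away from the surfaces f_1 = 0 and f_2 = 0.
F_{\cup}(f_1, f_2) \;=\; f_1 + f_2 + \sqrt{f_1^{2} + f_2^{2}}
\;+\; \frac{a_0}{1 + \left(\frac{f_1}{a_1}\right)^{2} + \left(\frac{f_2}{a_2}\right)^{2}}
```

Bounding the blend, as the article proposes, then amounts to further attenuating the displacement term by a function of an additional bounding solid so that the blend vanishes outside it.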

Journal ArticleDOI
TL;DR: Augmented reality systems are usually interactive; thus, it is necessary to verify usability to determine if the system is effective, and the AR research community needs to resolve human factors issues.
Abstract: Augmented reality (AR) has been part of computer graphics methodology for decades. A number of prototype AR systems have shown the possibilities this paradigm creates. Mixing graphical annotations and objects in a user's view of the surrounding environment offers a powerful metaphor for conveying information about that environment. AR systems' potential still exceeds the practice. In fact, most AR systems remain laboratory prototypes. There are several reasons for this; two of the most prominent are that researchers need more advanced hardware than currently available to implement the systems, and (the subject of this article) the AR research community needs to resolve human factors issues. AR systems are usually interactive; thus, we must verify usability to determine if the system is effective.

Journal ArticleDOI
TL;DR: Delta3D is actually a thin, unifying layer that sits atop many open source game engines the authors might already use that has a high-level, cross-platform (Win32 and Linux) C++ API designed with programmers in mind to soften the learning curve, but always makes lower levels of abstraction available to the developer.
Abstract: Game engines have so much in common that we have to wonder whether they should actually be a commodity; we believe they should. That advanced feature that makes one game engine different from all the rest is part of the reason why game engines and visual simulation tools cost so much. Furthermore, most game engines have a unique development pipeline associated with them. The way content is developed and integrated is specific to that engine, implying limited (if any) portability and reuse. This business model is perfectly appropriate for the entertainment industry, where having the latest graphics features can make or break a title, but for the training community, the model simply does not work. Delta3D is actually a thin, unifying layer that sits atop many open source game engines we might already use. It has a high-level, cross-platform (Win32 and Linux) C++ API designed with programmers in mind to soften the learning curve, but it always makes lower levels of abstraction available to the developer. Developers can create content through the level editor, or they can write Python scripts against the Delta3D API or the underlying tools directly. Delta3D uses the standard GNU Lesser General Public License (LGPL). It's completely modular and allows a best-of-breed approach whereby any module can be swapped out if a better option becomes available.
