
Showing papers in "Information Visualization in 2010"


Journal ArticleDOI
TL;DR: The spectrum of current representation techniques used on single trees, pairs of trees and finally multiple trees is discussed, in order to identify which representations are best suited to particular tasks and to find gaps in the representation space.
Abstract: This article summarises the current state of research into multiple tree visualisations. It discusses the spectrum of current representation techniques used on single trees, pairs of trees and finally multiple trees, in order to identify which representations are best suited to particular tasks and to find gaps in the representation space, in which opportunities for future multiple tree visualisation research may exist. The application areas from where multiple tree data are derived are enumerated, and the distinct structures that multiple trees make in combination with each other and the effect on subsequent approaches to their visualisation are discussed, along with the basic high-level goals of existing multiple tree visualisations.

140 citations


Journal ArticleDOI
TL;DR: This work reviewed existing work from several domains on uncertainty and created a classification of uncertainty based on the literature; commonalities in uncertainty were found across domains, and this refined classification should help in developing appropriate visualizations for each category of uncertainty.
Abstract: Uncertainty in data occurs in domains ranging from natural science to medicine to computer science. By developing ways to include uncertainty in our information visualizations, we can provide more accurate depictions of critical data sets so that people can make more informed decisions. One hindrance to visualizing uncertainty is that we must first understand what uncertainty is and how it is expressed. We reviewed existing work from several domains on uncertainty and created a classification of uncertainty based on the literature. We empirically evaluated and improved upon our classification by conducting interviews with 18 people from several domains, who self-identified as working with uncertainty. Participants described what uncertainty looks like in their data and how they deal with it. We found commonalities in uncertainty across domains and believe our refined classification will help us in developing appropriate visualizations for each category of uncertainty.

137 citations


Journal ArticleDOI
TL;DR: It is concluded that users can reliably distinguish twice as many different correlation levels when using scatterplots as when using PCPs, and that there is a bias towards reporting negative correlations when using PCPs.
Abstract: Scatterplots and parallel coordinate plots (PCPs) can both be used to assess correlation visually. In this paper, we compare these two visualization methods in a controlled user experiment. More specifically, 25 participants were asked to report observed correlation as a function of the sample correlation under varying conditions of visualization method, sample size and observation time. A statistical model is proposed to describe the correlation judgment process. The accuracy and the bias in the judgments in different conditions are established by interpreting the parameters in this model. A discriminability index is proposed to characterize the performance accuracy in each experimental condition. Moreover, a statistical test is applied to derive whether or not the human sensation scale differs from a theoretically optimal (that is, unbiased) judgment scale. Based on these analyses, we conclude that users can reliably distinguish twice as many different correlation levels when using scatterplots as when using PCPs. We also find that there is a bias towards reporting negative correlations when using PCPs. Therefore, we conclude that scatterplots are more effective than parallel coordinate plots in supporting visual correlation analysis.

120 citations


Journal ArticleDOI
TL;DR: The generalized scatter plot technique is proposed, which allows an overlap-free representation of large data sets to fit entirely into the display, and an optimization function that takes overlap and distortion of the visualization into account is identified.
Abstract: Scatter plots are one of the most powerful and most widely used techniques for visual data exploration. A well-known problem is that scatter plots often have a high degree of overlap, which may occlude a significant portion of the data values shown. In this paper, we propose the generalized scatter plot technique, which allows an overlap-free representation of large data sets to fit entirely into the display. The basic idea is to allow the analyst to optimize the degree of overlap and distortion to generate the best possible view. To allow an effective usage, we provide the capability to zoom smoothly between the traditional and our generalized scatter plots. We identify an optimization function that takes overlap and distortion of the visualization into account. We evaluate the generalized scatter plots according to this optimization function, and show that there usually exists an optimal compromise between overlap and distortion. Our generalized scatter plots have been applied successfully to a number of real-world IT services applications, such as server performance monitoring, telephone service usage analysis and financial data, demonstrating the benefits of the generalized scatter plots over traditional ones.

96 citations
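The abstract does not give the optimization function itself, but a toy version of the overlap/distortion trade-off can be sketched as follows. The grid-based overlap proxy, the linear weighting, and all names here are assumptions for illustration, not the paper's formulation:

```python
from collections import Counter

def overlap_fraction(points, cell=0.1):
    """Crude overplotting proxy: fraction of (non-negative) points that
    share a grid cell of the given size with at least one other point."""
    counts = Counter((int(x / cell), int(y / cell)) for x, y in points)
    crowded = sum(c for c in counts.values() if c > 1)
    return crowded / len(points)

def objective(overlap, distortion, alpha=0.5):
    """Illustrative combined cost: lower is better; alpha weights
    overlap against distortion of the layout."""
    return alpha * overlap + (1.0 - alpha) * distortion
```

An optimizer would then vary the amount of point displacement, re-measure overlap and distortion for each candidate view, and pick the view minimizing `objective` — mirroring the paper's idea of an optimal compromise between the two.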


Journal ArticleDOI
TL;DR: The methodology has a twofold application, allowing us to detect significant differences that help characterize patterns of behaviour of a geographical system of output, along with the generation of representations that serve as interfaces for domain analysis and information retrieval.
Abstract: In this study, visual representations are created in order to analyze different aspects of scientific collaboration at the international level. The main objective is to identify the international facet of research by following the flow of knowledge as expressed by the number of scientific publications, and then to establish the main geographical axes of output, showing the interrelationships of the domain, the intensity of these relations, and how the different types of collaboration are reflected in terms of visibility. Thus, the methodology has a twofold application, allowing us to detect significant differences that help characterize patterns of behaviour of a geographical system of output, along with the generation of representations that serve as interfaces for domain analysis and information retrieval.

80 citations


Journal ArticleDOI
TL;DR: This work distinguishes an automatic visualization system (AVS), which automatically decides what is to be visualized, from an automated visualization system, a programming system for automating the production of charts, graphs and visualizations; an AVS is designed to protect researchers from ignoring missing data, outliers, miscodes and other anomalies that can violate statistical assumptions or otherwise jeopardize the validity of models.
Abstract: AutoVis is a data viewer that responds to content - text, relational tables, hierarchies, streams, images - and displays the information appropriately (that is, as an expert would). Its design rests on the grammar of graphics, scagnostics and a modeler based on the logic of statistical analysis. We distinguish an automatic visualization system (AVS) from an automated visualization system. The former automatically makes decisions about what is to be visualized. The latter is a programming system for automating the production of charts, graphs and visualizations. An AVS is designed to provide a first glance at data before modeling and analysis are done. AVS is designed to protect researchers from ignoring missing data, outliers, miscodes and other anomalies that can violate statistical assumptions or otherwise jeopardize the validity of models. The design of this system incorporates several unique features: (1) a spare interface - analysts simply drag a data source into an empty window, (2) a graphics generator that requires no user definitions to produce graphs, (3) a statistical analyzer that protects users from false conclusions, and (4) a pattern recognizer that responds to the aspects (density, shape, trend, and so on) that professional statisticians notice when investigating data sets.

76 citations


Journal ArticleDOI
TL;DR: A novel metric called the mean area exponent is introduced that quantifies the distribution of area across nodes in a tree representation, and that can be applied to a broad range of different representations of trees.
Abstract: A mathematical evaluation and comparison of the space-efficiency of various 2D graphical representations of tree structures is presented. As part of the evaluation, a novel metric called the mean area exponent is introduced that quantifies the distribution of area across nodes in a tree representation, and that can be applied to a broad range of different representations of trees. Several representations are analyzed and compared by calculating their mean area exponent as well as the area they allocate to nodes and labels. Our analysis inspires a set of design guidelines as well as a few novel tree representations that are also presented.

67 citations


Journal ArticleDOI
TL;DR: 2D and 3D representations within a time geographical visual analysis tool for activity diary data are compared to show that the 3D representation has benefits over the 2D representation for feature identification but also indicate that these benefits can be lost if the3D representation is not carefully constructed to help the user to see them.
Abstract: Time geographical representations are becoming a common approach to analysing spatio-temporal data. Such representations appear intuitive in the process of identifying patterns and features as paths of populations form tracks through the 3D space, which can be seen converging and diverging over time. In this article, we compare 2D and 3D representations within a time geographical visual analysis tool for activity diary data. We identify a representative task and evaluate task performance between the two representations. The results show that the 3D representation has benefits over the 2D representation for feature identification but also indicate that these benefits can be lost if the 3D representation is not carefully constructed to help the user to see them.

23 citations


Journal ArticleDOI
TL;DR: This framework visualizes quantitative attributes of nodes in a network as a continuous surface by interpolating the scalar field, therefore avoiding scalability issues typical in conventional network visualizations while also maintaining the topological properties of the original network.
Abstract: We propose a new network visualization technique using scattered data interpolation and surface rendering, based upon a foundation layout of a scalar field. Contours of the interpolated surfaces are generated to support multi-scale visual interaction for data exploration. Our framework visualizes quantitative attributes of nodes in a network as a continuous surface by interpolating the scalar field, therefore avoiding scalability issues typical in conventional network visualizations while also maintaining the topological properties of the original network. We applied this technique to the study of a bio-molecular interaction network integrated with gene expression data for Alzheimer's Disease (AD). In this application, differential gene expression profiles obtained from the human brain are rendered for AD patients with differing degrees of severity and compared to healthy individuals. We show that this alternative visualization technique is effective in revealing several types of molecular biomarkers, which are traditionally difficult to detect due to "noises" in data derived from DNA microarray experiments.

21 citations
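Scattered data interpolation of per-node values onto a continuous field can be done in many ways; inverse-distance weighting is one common generic choice. The sketch below is illustrative only — the paper's actual interpolant is not specified in the abstract, and all names are assumptions:

```python
import math

def idw(samples, query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate of a scalar field at `query`,
    from (position, value) samples — e.g. (node layout position,
    expression value) pairs from a network layout."""
    num = den = 0.0
    for (x, y), value in samples:
        d = math.hypot(query[0] - x, query[1] - y)
        if d < eps:          # query coincides with a sample point
            return value
        w = d ** -power
        num += w * value
        den += w
    return num / den

# hypothetical node positions from a layout, each with a scalar attribute
nodes = [((0.0, 0.0), 1.0), ((1.0, 0.0), 3.0), ((0.0, 1.0), 2.0)]
field_at_node = idw(nodes, (0.0, 0.0))   # recovers the node's own value
```

Evaluating `idw` over a regular grid and contouring the result would give the kind of continuous surface with multi-scale contours that the paper describes.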


Journal ArticleDOI
TL;DR: This article presents the scientogram of the United States for the year 2002, identifying its essential structure and tries to detect patterns and tendencies in the three scientograms that would allow one to predict or flag the evolution of a scientific domain.
Abstract: Category cocitation and its representation through social networks is proving to be a very adequate technique for the visualization and analysis of great scientific domains. Its combination with pathfinder networks using pruning values r=∞ and q=n-1 makes manifest the essence of research in the domain represented, or what we might call the 'most salient structure'. The possible loss of structural information, caused by aggressive pruning in peripheral areas of the networks, is overcome by creating heliocentric maps for each category. The depictions obtained with this procedure become tools of great usefulness in view of their capacity to reveal the evolution of a given scientific domain over time, to show differences and similarities between different domains, and to suggest possible new lines for development. This article presents the scientogram of the United States for the year 2002, identifying its essential structure. We also show the scientograms of China for the years 1990 and 2002, in order to study its particular national evolution. Finally, we try to detect patterns and tendencies in the three scientograms that would allow one to predict or flag the evolution of a scientific domain.

21 citations


Journal Article
TL;DR: The DBBC2 system is now in a mature phase and deployment is continuing; a review of the backend is given, and the new functionalities in development are reported.
Abstract: The Digital Base Band Converter-2 (DBBC2) system is in a mature phase now, and the deployment is continuing. A review of the backend is shown, and the new functionalities in development are reported.

Journal ArticleDOI
TL;DR: In terms of effectiveness and efficiency, the weak 3D visualization is as good as the strong 3D, and thus advanced 3D visualizations may not be necessary for these kinds of tasks.
Abstract: New technologies and techniques allow novel kinds of visualizations, and different types of 3D visualizations are constantly developed. We propose a categorization of 3D visualizations and, based on this categorization, evaluate two versions of a space-time cube that show discrete spatiotemporal data. The two visualization techniques used are a head-tracked stereoscopic visualization ('strong 3D') and a static monocular visualization ('weak 3D'). In terms of effectiveness and efficiency the weak 3D visualization is as good as the strong 3D, and thus advanced 3D visualizations may not be necessary for these kinds of tasks.

Journal ArticleDOI
TL;DR: The objective in this paper is to visualize the semantics of filmscript, and beyond filmscript any other partially structured, time-ordered sequence of text segments, and an innovative approach to plot characterization is developed.
Abstract: We relate tag clouds to other forms of visualization, including planar or reduced dimensionality mapping, and to Kohonen self-organizing maps. Using a modified tag cloud visualization, we incorporate other information into it, including text sequence and most pertinent words. Our notion of word pertinence goes beyond just word frequency and instead takes a word in a mathematical sense as located at the average of all of its pairwise relationships. We capture semantics through context, taken as all pairwise relationships. Our domain of application is that of filmscript analysis. The analysis of filmscripts, always important for cinema, is experiencing a major gain in importance in the context of television. Our objective in this paper is to visualize the semantics of filmscript, and beyond filmscript any other partially structured, time-ordered sequence of text segments. In particular, we develop an innovative approach to plot characterization.
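The abstract's notion of pertinence — a word "located at the average of all of its pairwise relationships", with context captured through co-occurrence — can be given a crude stdlib sketch. This is one simple reading of that description, not the paper's actual measure, and all names are illustrative:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(segments):
    """Count pairwise co-occurrences of words within text segments
    (each segment given as a whitespace-separated string)."""
    pairs = Counter()
    for seg in segments:
        words = sorted(set(seg.split()))
        for a, b in combinations(words, 2):
            pairs[(a, b)] += 1
    return pairs

def pertinence(word, vocab, pairs):
    """Average pairwise relationship of `word` to every other word in
    the vocabulary — a crude proxy that goes beyond raw frequency."""
    others = [w for w in vocab if w != word]
    total = sum(pairs.get(tuple(sorted((word, w))), 0) for w in others)
    return total / len(others)
```

Applied to the scenes of a filmscript as segments, words that co-occur broadly across the script would score higher than words that are merely frequent in one place.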

Journal Article
TL;DR: A digital backend based on the ROACH board has been developed jointly by the National Radio Astronomy Observatory and MIT Haystack Observatory and the RDBE will have both Polyphase Filterbank and Digital Downconverter personalities.
Abstract: A digital backend based on the ROACH board has been developed jointly by the National Radio Astronomy Observatory and MIT Haystack Observatory. The RDBE will have both Polyphase Filterbank and Digital Downconverter personalities. The initial configuration outputs sixteen 32-MHz channels, comprised of half the channels from the PFB processing of the two IF inputs, for use in the VLBI2010 geodetic system and in the VLBA sensitivity upgrade project. The output rate is 2 × 10⁹ bits/second (2 Gbps) over a 10 GigE connection to the Mark 5C, with the data written in Mark 5B format on disk.

Journal ArticleDOI
TL;DR: It is shown that awareness of the composition principles used by other animators and visual artists can help programmers to create better algorithmic animations.
Abstract: This paper deals with techniques for the design and production of appealing algorithmic animations and their use in computer science education. A good visual animation is both a technical artifact and a work of art that can greatly enhance the understanding of an algorithm's workings. In the first part of the paper, I show that awareness of the composition principles used by other animators and visual artists can help programmers to create better algorithmic animations. The second part shows how to incorporate those ideas in novel animation systems, which represent data structures in a visually intuitive manner. The animations described in this paper have been implemented and used in the classroom for courses at university level.

Proceedings ArticleDOI
TL;DR: This is the starting point for defining a guidelines manual intended to help information visualization specialists in the vessel monitoring field and, more generally, in the GIS field.
Abstract: In information systems, data representation is of great importance. Indeed, the visualization of information is the final point of contact between the user and the information system: the space where communication takes place. In real-time monitoring systems this passage is especially critical, for reasons related to timeliness and the transparency of relevant information. These factors are fundamental to vessel monitoring systems. This is the starting point from which we define a guidelines manual, intended to help information visualization specialists in the vessel monitoring field and, more generally, in the GIS field.

Journal Article
TL;DR: The real accuracy of CPO prediction is assessed using the actual IERS and PUL predictions made in 2007-2009, and results of operational processing are analyzed to investigate the actual impact of EOP prediction errors on the rapid UT1 results.
Abstract: The UT1 Intensive results heavily depend on the celestial pole offset (CPO) model used during data processing. Since accurate CPO values are available with a delay of two to four weeks, CPO predictions are necessarily applied to the UT1 Intensive data analysis, and errors in the predictions can influence the operational UT1 accuracy. In this paper we assess the real accuracy of CPO prediction using the actual IERS and PUL predictions made in 2007-2009. Also, results of operational processing were analyzed to investigate the actual impact of EOP prediction errors on the rapid UT1 results. It was found that the impact of CPO prediction errors is at a level of several microseconds, whereas the impact of the inaccuracy in the polar motion prediction may be about one order of magnitude larger for ultra-rapid UT1 results. The situation can be amended if the IERS Rapid solution is updated more frequently.

Journal ArticleDOI
TL;DR: This design study reports on how information visualization was applied to the common problem for knowledge workers to find people with relevant expertise and interests in a large organization and claims that the Pinpoint concept approaches desirability as well as usefulness and usability in its intended use situation.
Abstract: This design study reports on how information visualization was applied to the common problem for knowledge workers to find people with relevant expertise and interests in a large organization. The outcome is the Pinpoint concept, an interactive visualization where the user's most closely related colleagues are presented radially for browsing, filtering and further exploration of the topical networks of the organization. Based on design reasoning and empirical validation, we claim that the concept approaches desirability as well as usefulness and usability in its intended use situation. We further argue that it is feasible for deployment under certain conditions, and that it is applicable in a range of large organizations if organization-specific policies and standards are taken into account.

Journal ArticleDOI
TL;DR: Results show that Isomap is significantly better at preserving local structural detail than MDS, suggesting it is better suited to cluster growing and other semantic navigation tasks, and it is shown that applying a minimum-cost graph pruning criterion can provide a parameter-free alternative to the traditional K-neighbour method.
Abstract: Previous work has shown that distance-similarity visualisation or 'spatialisation' can provide a potentially useful context in which to browse the results of a query search, enabling the user to adopt a simple local foraging or 'cluster growing' strategy to navigate through the retrieved document set. However, faithfully mapping feature-space models to visual space can be problematic owing to their inherent high dimensionality and non-linearity. Conventional linear approaches to dimension reduction tend to fail at this kind of task, sacrificing local structure in order to preserve a globally optimal mapping. In this paper the clustering performance of a recently proposed algorithm called isometric feature mapping (Isomap), which deals with non-linearity by transforming dissimilarities into geodesic distances, is compared to that of nonmetric multidimensional scaling (MDS). Various graph pruning methods, for geodesic distance estimation, are also compared. Results show that Isomap is significantly better at preserving local structural detail than MDS, suggesting it is better suited to cluster growing and other semantic navigation tasks. Moreover, it is shown that applying a minimum-cost graph pruning criterion can provide a parameter-free alternative to the traditional K-neighbour method, resulting in spatial clustering that is equivalent to or better than that achieved using an optimal-K criterion.
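Isomap's central step — replacing straight-line dissimilarities with geodesic distances estimated over a neighbourhood graph — can be sketched with stdlib Python. The k-NN construction and the toy point set are illustrative; a full Isomap would follow this with classical MDS on the geodesic distance matrix:

```python
import heapq
import math

def knn_graph(points, k):
    """Symmetric k-nearest-neighbour graph with Euclidean edge weights."""
    n = len(points)
    adj = {i: {} for i in range(n)}
    for i in range(n):
        nbrs = sorted((math.dist(points[i], points[j]), j)
                      for j in range(n) if j != i)[:k]
        for d, j in nbrs:
            adj[i][j] = d
            adj[j][i] = d
    return adj

def geodesic_distances(adj, src):
    """Dijkstra: shortest-path (geodesic) distances from src over the graph."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# points on a quarter circle: the geodesic between the endpoints follows
# the arc through intermediate points, so it exceeds the chord distance
arc = [(math.cos(t), math.sin(t))
       for t in (i * math.pi / 8 for i in range(5))]
geo = geodesic_distances(knn_graph(arc, 2), 0)[4]
```

This is exactly the property the paper exploits: on curved manifolds, geodesic distances preserve the local neighbourhood structure that a direct Euclidean embedding would distort.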

Journal Article
TL;DR: The Spanish-Portuguese project RAEGE (Red Atlantica de Estaciones Geodinamicas y Espaciales) aims to set up a network of four Geodetic Fundamental Stations in Yebes (1), the Canary Islands (1), and the Açores Islands (2), as part of the developments needed for the IVS VLBI2010 scenario.
Abstract: Project RAEGE (Red Atlantica de Estaciones Geodinamicas y Espaciales) intends to set up a Spanish-Portuguese network of four Geodetic Fundamental Stations in Yebes (1), the Canary Islands (1), and the Açores Islands (2), as part of the developments needed for the IVS VLBI2010 scenario. It is envisaged that each Geodetic Fundamental Station will be equipped with one radio telescope of VLBI2010 specifications (at least 12-m diameter, fast slewing speed, but also able to operate up to 40 GHz), one gravimeter, one permanent GNSS station, and, at least at the Yebes site, one SLR facility. The National Geographical Institute of Spain (IGN) has experience in VLBI, having been a member of the European VLBI Network since 1993 and being one of the founding institutions of the Joint Institute for VLBI in Europe (JIVE), and it has been participating in geodetic VLBI campaigns with the 14-m radio telescope in Yebes since 1995. A new 40-m radio telescope has been built and was recently put into operation. It regularly participates in IVS sessions. There is infrastructure available for the new stations at Yebes and the Canary Islands. An agreement between IGN, the Portuguese Geographical Institute (IGP), and the Regional Government of the Açores ensures that the RAEGE project can become a reality by 2013.

Journal ArticleDOI
TL;DR: In this study of a collaborative task that required only a minimum of information to be shared, it was found that partitioned views with a lack of shared visual references were significantly less efficient than integrated views, but the study showed that subjects were equally capable of solving the task at low error levels in partitioned and integrated views.
Abstract: Multi-Viewer Display Environments (MVDE) provide unique opportunities to present personalized information to several users concurrently in the same physical display space. MVDEs can support correct 3D visualizations to multiple users, present correctly oriented text and symbols to all viewers and allow individually chosen subsets of information in a shared context. MVDEs aim at supporting collaborative visual analysis, and when used to visualize disjoint information in partitioned visualizations they even necessitate collaboration. When solving visual tasks collaboratively in a MVDE, overall performance is affected not only by the inherent effects of the graphical presentation but also by the interaction between the collaborating users. We present results from an empirical study where we compared views with lack of shared visual references in disjoint sets of information to views with mutually shared information. Potential benefits of 2D and 3D visualizations in a collaborative task were investigated, as were the effects of partitioning visualizations in terms of task performance, interaction behavior and clutter reduction. In our study of a collaborative task that required only a minimum of information to be shared, we found that partitioned views with a lack of shared visual references were significantly less efficient than integrated views. However, the study showed that subjects were equally capable of solving the task at low error levels in partitioned and integrated views. An explorative analysis revealed that the amount of visual clutter was reduced heavily in partitioned visualization, whereas verbal and deictic communication between subjects increased. It also showed that the type of the visualization (2D/3D) affects interaction behavior strongly. An interesting result is that collaboration on complex geo-time visualizations is actually as efficient in 2D as in 3D.

Journal Article
TL;DR: First results obtained from the 24-hour IVS R1 and R4 sessions are shown, which can determine time independent geodynamical parameters such as Love and Shida numbers of the solid Earth tides.
Abstract: Since 2008 the VLBI group at the Institute of Geodesy and Geophysics at TU Vienna has focused on the development of a new VLBI data analysis software called VieVS (Vienna VLBI Software). One part of the program, currently under development, is a unit for parameter estimation in so-called global solutions, where the connection of the single sessions is done by stacking at the normal equation level. We can determine time independent geodynamical parameters such as Love and Shida numbers of the solid Earth tides. Apart from the estimation of the constant nominal values of Love and Shida numbers for the second degree of the tidal potential, it is possible to determine frequency dependent values in the diurnal band together with the resonance frequency of Free Core Nutation. In this paper we show first results obtained from the 24-hour IVS R1 and R4 sessions.

Journal Article
TL;DR: It turned out that, in order to provide a realistic accuracy measure of the model coefficients, the formal errors should be inflated by a factor of three, and this captures almost all of the differences that are caused by different estimation techniques.
Abstract: Recent investigations have shown significant shortcomings in the model which is proposed by the IERS to account for the variations in the Earth's rotation with periods around one day and less. To overcome this, an empirical model can be estimated more or less directly from the observations of space geodetic techniques. The aim of this paper is to evaluate the quality and reliability of such a model based on VLBI observations. Therefore, the impact of the estimation method and the analysis options as well as the temporal stability are investigated. It turned out that, in order to provide a realistic accuracy measure of the model coefficients, the formal errors should be inflated by a factor of three. This coincides with the noise floor and the repeatability of the model coefficients and it captures almost all of the differences that are caused by different estimation techniques. The impact of analysis options is small but significant when changing troposphere parameterization or including harmonic station position variations.

Journal Article
TL;DR: The development status of the Chinese DBBC, the software and FPGA-based correlators, and the new VLBI antenna, as well as V LBI applications are summarized in this paper.
Abstract: VLBI technology development made significant progress at SHAO in the last few years. The development status of the Chinese DBBC, the software and FPGA-based correlators, and the new VLBI antenna, as well as VLBI applications are summarized in this paper.

Journal Article
TL;DR: In 2007, the IVS Directing Board established IVS Working Group 4 on VLBI Data Structures to guide the development of a new VLBI data format.
Abstract: In 2007 the IVS Directing Board established IVS Working Group 4 on VLBI Data Structures. This note discusses the current VLBI data format, goals for a new format, the history and formation of the Working Group, and a timeline for the development of a new VLBI data format.

Journal Article
TL;DR: The Mark 5C disk-based VLBI data system is being developed as the third-generation Mark 5 diskbased system, increasing the sustained data-recording rate capability to 4 Gbps.
Abstract: The Mark 5C disk-based VLBI data system is being developed as the third-generation Mark 5 disk-based system, increasing the sustained data-recording rate capability to 4 Gbps. It is built on the same basic platform as the Mark 5A, Mark 5B and Mark 5B+ systems and will use the same 8-disk modules as earlier Mark 5 systems, although two 8-disk modules will be necessary to support the 4 Gbps rate. Unlike its earlier brethren, which use proprietary data interfaces, the Mark 5C will accept data from a standard 10 Gigabit Ethernet connection and be compatible with the emerging VLBI Data Interchange Format (VDIF) standard. Data sources for the Mark 5C system will be based on new digital backends now being developed, specifically the RDBE in the U.S. and the dBBC in Europe, as well as others. The Mark 5C system is being planned for use with the VLBI2010 system and will also be used by NRAO as part of the VLBA sensitivity upgrade program; it will also be available to the global VLBI community from Conduant. Mark 5C system specification and development is supported by Haystack Observatory, NRAO, and Conduant Corporation. Prototype Mark 5C systems are expected in early 2010.

Journal Article
TL;DR: In this article, GPS ionosphere maps were applied to a series of K and Q band VLBA astrometry sessions to try to eliminate a declination bias in estimated source positions.
Abstract: GPS TEC ionosphere maps were first applied to a series of K and Q band VLBA astrometry sessions to try to eliminate a declination bias in estimated source positions. Their usage has been expanded to calibrate X-band only VLBI observations as well. At K-band, approximately 60% of the declination bias appears to be removed with the application of GPS ionosphere calibrations. At X-band, however, it appears that up to 90% or more of the declination bias is removed, with a corresponding increase in RA and declination uncertainties of approximately 0.5 mas. GPS ionosphere calibrations may be very useful for improving the estimated positions of the X-only and S-only sources in the VCS and RDV sessions.

Journal Article
TL;DR: This paper describes a new project of the Russian VLBI Network dedicated to Universal Time determination in quasi on-line mode; design variants of the receiving devices, digital data acquisition system, and phase calibration system are specially considered.
Abstract: This paper deals with a new project of the Russian VLBI Network dedicated to Universal Time determination in quasi on-line mode. The basic principles of the network design and location of antennas are explained. Design variants of the receiving devices, digital data acquisition system, and phase calibration system are specially considered. The frequency ranges and expected values of noise temperature are given.

Journal Article
TL;DR: Several components of the system will be improved for the prototype version of VLBI2010, including the feed, digital backend, and recorder, and these will be installed on a 12-m antenna that has been purchased and is ready for installation at the Goddard Space Flight Center outside of Washington, D.C.
Abstract: The next generation geodetic VLBI instrument is being developed with a goal of 1 mm position uncertainty in twenty-four hours. We have implemented a proof-of-concept system for a possible VLBI2010 signal chain, from feed through recorder, on the Westford (Massachusetts, USA) 18-m and MV-3 (Maryland, USA) 5-m antennas. Data have been obtained in four 512 MHz bands spanning the range 3.5 to 11 GHz to investigate the sensitivity and phase delay capability of the system. Using a new phase cal design, the phases have been aligned across four bands spanning 2 GHz with an RMS deviation of approximately eight degrees. Several components of the system will be improved for the prototype version of VLBI2010, including the feed, digital backend, and recorder, and these will be installed on a 12-m antenna that has been purchased and is ready for installation at the Goddard Space Flight Center outside of Washington, D.C., USA, site of the MV-3 antenna.

Journal Article
TL;DR: The results presented show that the phase-calibrated phase residuals from four 512 MHz bands spanning 2 GHz have an RMS phase variation of 8°, which corresponds to a delay uncertainty of 12 ps.
Abstract: For the past three years, the MIT Haystack Observatory and the broadband team have been developing a proof-of-concept broadband geodetic VLBI microwave (2-12 GHz) receiver. Also on-going at Haystack is the development of post-correlation processing needed to extract the geodetic observables. Using this processing, the first fully-phase-calibrated geodetic fringes have been produced from observations conducted with the proof-of-concept system. The results we present show that the phase-calibrated phase residuals from four 512 MHz bands spanning 2 GHz have an RMS phase variation of 8°, which corresponds to a delay uncertainty of 12 ps.