
Showing papers in "IEEE Computer in 2000"


Journal ArticleDOI
TL;DR: The authors present a configurable architecture that enables these opportunities to be efficiently realized in silicon and believe that this energy-conscious system design and implementation methodology will lead to radio nodes that are two orders of magnitude more efficient than existing solutions.
Abstract: Technology advances have made it conceivable to build and deploy dense wireless networks of heterogeneous nodes collecting and disseminating wide ranges of environmental data. Applications of such sensor and monitoring networks include smart homes equipped with security, identification, and personalization systems; intelligent assembly systems; warehouse inventory control; interactive learning toys; and disaster mitigation. The opportunities emerging from this technology give rise to new definitions of distributed computing and the user interface. Crucial to the success of these ubiquitous networks is the availability of small, lightweight, low-cost network elements, which the authors call PicoNodes. The authors present a configurable architecture that enables these opportunities to be efficiently realized in silicon. They believe that this energy-conscious system design and implementation methodology will lead to radio nodes that are two orders of magnitude more efficient than existing solutions.

1,139 citations


Journal ArticleDOI
TL;DR: CPU2000, as described in this paper, is a new CPU benchmark suite with 19 applications that have never before appeared in a SPEC CPU suite; the article explains how SPEC developed the suite and what the benchmarks do.
Abstract: As computers and software have become more powerful, it seems almost human nature to want the biggest and fastest toy you can afford. But how do you know if your toy is tops? Even if your application never does any I/O, it's not just the speed of the CPU that dictates performance. Cache, main memory, and compilers also play a role. Software applications also have differing performance requirements. So whom do you trust to provide this information? The Standard Performance Evaluation Corporation (SPEC) is a nonprofit consortium whose members include hardware vendors, software vendors, universities, customers, and consultants. SPEC's mission is to develop technically credible and objective component- and system-level benchmarks for multiple operating systems and environments, including high-performance numeric computing, Web servers, and graphical subsystems. On 30 June 2000, SPEC retired the CPU95 benchmark suite. Its replacement is CPU2000, a new CPU benchmark suite with 19 applications that have never before been in a SPEC CPU suite. The article discusses how SPEC developed this benchmark suite and what the benchmarks do.
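
The abstract does not spell out how a suite-level figure is produced; SPEC CPU suites report a composite score that is the geometric mean of each benchmark's runtime ratio against a fixed reference machine. A minimal sketch of that calculation (the benchmark times below are hypothetical, not CPU2000 reference data):

```python
from math import prod

def spec_style_score(ref_times, measured_times):
    """Geometric mean of (reference / measured) runtime ratios.

    Illustrative only: the reference times below are made up,
    not actual CPU2000 figures.
    """
    ratios = [r / m for r, m in zip(ref_times, measured_times)]
    return prod(ratios) ** (1.0 / len(ratios))

# Hypothetical three-benchmark example
ref = [1400.0, 1800.0, 2600.0]      # reference machine runtimes (s)
measured = [350.0, 600.0, 650.0]    # system under test runtimes (s)
print(round(spec_style_score(ref, measured), 2))  # ~3.63
```

The geometric mean keeps any single benchmark from dominating the composite, which is why SPEC prefers it to an arithmetic mean.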

877 citations


Journal ArticleDOI
TL;DR: The authors believe that the answer lies in the use and reuse of software components that work within an explicit software architecture, and the Koala model, a component-oriented approach, is their way of handling the diversity of software in consumer electronics.
Abstract: Most consumer electronics today contain embedded software. In the early days, developing CE software presented relatively minor challenges, but in the past several years three significant problems have arisen: size and complexity of the software in individual products; the increasing diversity of products and their software; and the need for decreased development time. The question of handling diversity and complexity in embedded software at an increasing production speed becomes an urgent one. The authors present their belief that the answer lies not in hiring more software engineers. They are not readily available, and even if they were, experience shows that larger projects induce larger lead times and often result in greater complexity. Instead, they believe that the answer lies in the use and reuse of software components that work within an explicit software architecture. The Koala model, a component-oriented approach detailed in this article, is their way of handling the diversity of software in consumer electronics. Used for embedded software in TV sets, it allows late binding of reusable components with no additional overhead.
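
As an illustration of the component idea described above, the sketch below shows a provides/requires-style composition in which a configuration, not the component code, decides which concrete provider satisfies a required interface. It is only a loose Python analogy; Koala itself is a dedicated component description language whose bindings are resolved at build time in C, with no runtime indirection.

```python
class Tuner:
    """Component providing a 'tuner' interface (hypothetical example)."""
    def set_frequency(self, hz: int) -> None:
        print(f"tuning to {hz} Hz")

class ChannelManager:
    """Component that *requires* a tuner interface; the concrete
    provider is chosen by the configuration, not by this code."""
    def __init__(self, tuner):
        self._tuner = tuner                 # requires-interface binding
    def select_channel(self, channel: int) -> None:
        self._tuner.set_frequency(470_000_000 + channel * 8_000_000)

# Configuration step: wire requires-interfaces to provides-interfaces.
cm = ChannelManager(tuner=Tuner())
cm.select_channel(3)
```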

795 citations


Journal ArticleDOI
TL;DR: The Virtual Inter Network Testbed (VINT) project as discussed by the authors has enhanced its network simulator and related software to provide several practical innovations that broaden the conditions under which researchers can evaluate network protocols.
Abstract: Network researchers must test Internet protocols under varied conditions to determine whether they are robust and reliable. The paper discusses the Virtual Inter Network Testbed (VINT) project which has enhanced its network simulator and related software to provide several practical innovations that broaden the conditions under which researchers can evaluate network protocols.

784 citations


Journal ArticleDOI
TL;DR: The authors describe the PipeRench architecture and how it solves some of the pre-existing problems with FPGA architectures, such as logic granularity, configuration time, forward compatibility, hard constraints and compilation time.
Abstract: With the proliferation of highly specialized embedded computer systems has come a diversification of workloads for computing devices. General-purpose processors are struggling to efficiently meet these applications' disparate needs, and custom hardware is rarely feasible. According to the authors, reconfigurable computing, which combines the flexibility of general-purpose processors with the efficiency of custom hardware, can provide the alternative. PipeRench and its associated compiler comprise the authors' new architecture for reconfigurable computing. Combined with a traditional digital signal processor, microcontroller or general-purpose processor, PipeRench can support a system's various computing needs without requiring custom hardware. The authors describe the PipeRench architecture and how it solves some of the pre-existing problems with FPGA architectures, such as logic granularity, configuration time, forward compatibility, hard constraints and compilation time.

578 citations


Journal ArticleDOI
TL;DR: To help investigate the viability of connected FPGA systems, the authors designed their own architecture called Garp and experimented with running applications on it, investigating whether Garp's design enables automatic, fast, effective compilation across a broad range of applications.
Abstract: Various projects and products have been built using off-the-shelf field-programmable gate arrays (FPGAs) as computation accelerators for specific tasks. Such systems typically connect one or more FPGAs to the host computer via an I/O bus. Some have shown remarkable speedups, albeit limited to specific application domains. Many factors limit the general usefulness of such systems. Long reconfiguration times prevent the acceleration of applications that spread their time over many different tasks. Low-bandwidth paths for data transfer limit the usefulness of such systems to tasks that have a high computation-to-memory-bandwidth ratio. In addition, standard FPGA tools require hardware design expertise which is beyond the knowledge of most programmers. To help investigate the viability of connected FPGA systems, the authors designed their own architecture called Garp and experimented with running applications on it. They are also investigating whether Garp's design enables automatic, fast, effective compilation across a broad range of applications. They present their results in this article.

478 citations


Journal ArticleDOI
TL;DR: The author attempts to answer questions as to why FPGAs have been so much more successful than their microprocessor and DSP counterparts and how configurable computing fits into the arsenal of structures used to build general, programmable computing platforms.
Abstract: More and more, field-programmable gate arrays (FPGAs) are accelerating computing applications. The absolute performance achieved by these configurable machines has been impressive-often one to two orders of magnitude greater than processor-based alternatives. Configurable computing is one of the fastest, most economical ways to solve problems such as RSA (Rivest-Shamir-Adleman) decryption, DNA sequence matching, signal processing, emulation, and cryptographic attacks. But questions remain as to why FPGAs have been so much more successful than their microprocessor and DSP counterparts. Do FPGA architectures have inherent advantages? Or are these examples just flukes of technology and market pricing? Will advantages increase, decrease, or remain the same as technology advances? Is there some generalization that accounts for the advantages in these cases? The author attempts to answer these questions and to see how configurable computing fits into the arsenal of structures used to build general, programmable computing platforms.

404 citations




Journal ArticleDOI
TL;DR: This article goes into detail about the BioID system functions, explaining the data acquisition and preprocessing techniques for voice, facial, and lip imagery data and the classification principles used for optical features and the sensor fusion options.
Abstract: Biometric identification systems, which use physical features to check a person's identity, ensure much greater security than password and number systems. Biometric features such as the face or a fingerprint can be stored on a microchip in a credit card, for example. A single feature, however, sometimes fails to be exact enough for identification. Another disadvantage of using only one feature is that the chosen feature is not always readable. Dialog Communication Systems (DCS AG) developed BioID, a multimodal identification system that uses three different features-face, voice, and lip movement-to identify people. With its three modalities, BioID achieves much greater accuracy than single-feature systems. Even if one modality is somehow disturbed-for example, if a noisy environment drowns out the voice-the other two modalities still lead to an accurate identification. This article goes into detail about the system functions, explaining the data acquisition and preprocessing techniques for voice, facial, and lip imagery data. The authors also explain the classification principles used for optical features and the sensor fusion options (the combinations of the three results-face, voice, lip movement-to obtain varying levels of security).
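
To make the sensor-fusion idea concrete, here is a minimal score-level fusion sketch: each modality produces a match score, the scores are combined, and the acceptance threshold rises with the required security level. The weights, thresholds, and fusion rule are hypothetical illustrations, not BioID's actual algorithm.

```python
def fuse_scores(face: float, voice: float, lips: float,
                weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted-sum fusion of per-modality match scores in [0, 1].
    The weights and rule are hypothetical, not BioID's strategy."""
    return weights[0] * face + weights[1] * voice + weights[2] * lips

def accept(face: float, voice: float, lips: float, security: str = "high") -> bool:
    """Higher security levels demand a higher fused score."""
    threshold = {"low": 0.5, "medium": 0.65, "high": 0.8}[security]
    return fuse_scores(face, voice, lips) >= threshold

# A noisy environment degrades the voice score, but strong face and
# lip-movement scores can still carry the decision at medium security.
print(accept(face=0.9, voice=0.2, lips=0.85, security="medium"))  # True
```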

386 citations


Journal ArticleDOI
TL;DR: The author comparatively analyzes 80 implementations of the phone-code program in seven different languages, investigating several aspects of each language, including program length, programming effort, runtime efficiency, memory consumption, and reliability.
Abstract: Often heated, debates regarding different programming languages' effectiveness remain inconclusive because of scarce data and a lack of direct comparisons. The author addresses that challenge, comparatively analyzing 80 implementations of the phone-code program in seven different languages (C, C++, Java, Perl, Python, Rexx and Tcl). Further, for each language, the author analyzes several separate implementations by different programmers. The comparison investigates several aspects of each language, including program length, programming effort, runtime efficiency, memory consumption, and reliability. The author uses these comparisons to present insight into programming language performance.
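
For readers unfamiliar with the benchmark task, the sketch below solves a simplified variant of the phone-code problem: find dictionary words whose letters map back onto a given digit string. The keypad mapping and the single-word restriction are simplifications for illustration and do not match the study's exact task specification.

```python
# Simplified phone-code style task. The digit-to-letter table below is
# the familiar phone keypad, which only approximates the mapping used
# in the study.
KEYPAD = {
    '2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
    '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz',
}
LETTER_TO_DIGIT = {ch: d for d, letters in KEYPAD.items() for ch in letters}

def encodings(number: str, words: list[str]) -> list[str]:
    """Return dictionary words that encode the full digit string."""
    return [w for w in words
            if len(w) == len(number)
            and all(LETTER_TO_DIGIT.get(c) == d
                    for c, d in zip(w.lower(), number))]

print(encodings("228", ["bat", "cat", "act", "dog"]))  # ['bat', 'cat', 'act']
```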

359 citations


Journal ArticleDOI
TL;DR: Of the biometrics that give the user some control over data acquisition, voice, face, and fingerprint systems have undergone the most study and testing-and therefore occupy the bulk of this discussion.
Abstract: On the basis of media hype alone, you might conclude that biometric passwords will soon replace their alphanumeric counterparts with versions that cannot be stolen, forgotten, lost, or given to another person. But what if the actual performance of these systems falls short of the estimates? The authors designed this article to provide sufficient information to know what questions to ask when evaluating a biometric system, and to assist in determining whether performance levels meet the requirements of an application. For example, a low-performance biometric is probably sufficient for reducing-as opposed to eliminating-fraud. Likewise, completely replacing an existing security system with a biometric-based one may require a high-performance biometric system, or the required performance may be beyond what current technology can provide. Of the biometrics that give the user some control over data acquisition, voice, face, and fingerprint systems have undergone the most study and testing-and therefore occupy the bulk of this discussion. This article also covers the tools and techniques of biometric testing.
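
The core quantities in biometric testing are the false accept and false reject rates measured at a decision threshold. A minimal sketch of that calculation using made-up match scores (the test protocols discussed in the article involve far more, such as enrollment policy and repeated attempts):

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """False accept rate (impostors at or above the threshold) and
    false reject rate (genuine users below it) for a score-based matcher.
    Illustrative sketch only; scores below are invented."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

genuine = [0.91, 0.85, 0.78, 0.64, 0.95]   # hypothetical match scores
impostor = [0.12, 0.33, 0.71, 0.25, 0.05]
print(error_rates(genuine, impostor, threshold=0.7))  # (0.2, 0.2)
```

Raising the threshold trades false accepts for false rejects, which is why a single headline accuracy number says little without the operating point.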

Journal ArticleDOI
TL;DR: Researchers are investigating summarization tools and methods that automatically extract or abstract content from a range of information sources, including multimedia, looking at approaches which roughly fall into two categories: knowledge-poor and knowledge-rich.
Abstract: Summarization, the art of abstracting key content from one or more information sources, has become an integral part of everyday life. Researchers are investigating summarization tools and methods that automatically extract or abstract content from a range of information sources, including multimedia. Researchers are looking at approaches which roughly fall into two categories. Knowledge-poor approaches avoid the need to add new rules for each new application domain or language. Knowledge-rich approaches assume that if you grasp the meaning of the text, you can reduce it more effectively, thus yielding a better summary. Some approaches use a hybrid. In both methods, the main constraint is the compression requirement. High reduction rates pose a challenge because they are hard to attain without a reasonable amount of background knowledge. Another challenge is how to evaluate summarizers. If you are to trust that the summary is indeed a reliable substitute for the source, you must be confident that it does in fact reflect what is relevant in that source. Hence, methods for creating and evaluating summaries must complement each other.
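
As a concrete example of the knowledge-poor end of the spectrum, the sketch below extracts the highest-scoring sentences by simple word frequency and exposes the compression rate as a parameter. It is a bare-bones illustration only; practical extractive systems add position, cue-phrase, and stop-word heuristics, and knowledge-rich systems work quite differently.

```python
import re
from collections import Counter

def extract_summary(text: str, compression: float = 0.25) -> str:
    """Knowledge-poor extractive summary: keep the highest-scoring
    sentences, where a sentence scores by the frequency of its words
    in the whole document. Bare-bones sketch for illustration."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r'\w+', s.lower())),
        reverse=True)
    keep = max(1, int(len(sentences) * compression))
    chosen = set(scored[:keep])
    # Emit the chosen sentences in their original order.
    return ' '.join(s for s in sentences if s in chosen)
```

The `compression` parameter is the "main constraint" the abstract mentions: the smaller it gets, the more the selection depends on knowledge the frequency counts do not carry.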

Journal ArticleDOI
TL;DR: The Grid Security Infrastructure (GSI) offers secure single sign-ons and preserves site control over access policies and local security, and provides its own versions of common applications, such as FTP and remote login, and a programming interface for creating secure applications.
Abstract: Participants in virtual organizations commonly need to share resources such as data archives, computer cycles, and networks, resources usually available only with restrictions based on the requested resource's nature and the user's identity. Thus, any sharing mechanism must have the ability to authenticate the user's identity and determine whether the user is authorized to request the resource. Virtual organizations tend to be fluid, however, so authentication mechanisms must be flexible and lightweight, allowing administrators to quickly establish and change resource-sharing arrangements. Nevertheless, because virtual organizations complement rather than replace existing institutions, sharing mechanisms cannot change local policies and must allow individual institutions to maintain control over their own resources. Our group has created and deployed an authentication and authorization infrastructure that meets these requirements: the Grid Security Infrastructure (I. Foster et al., 1998). GSI offers secure single sign-ons and preserves site control over access policies and local security. It provides its own versions of common applications, such as FTP and remote login, and a programming interface for creating secure applications. Dozens of supercomputers and storage systems already use GSI, a level of acceptance reached by few other security infrastructures.

Journal ArticleDOI
J. Agre, L. Clare
TL;DR: Several technical challenges that must be overcome are discussed to fully realize the viability of the DSN concept in realistic application scenarios.
Abstract: Distributed sensor networks (DSNs) consisting of many small, low-cost, spatially dispersed, communicating nodes have been proposed for many applications, such as area surveillance and environmental monitoring. Trends in integrated electronics, such as better performance-to-cost ratios, low-power radios, and microelectromechanical systems (MEMS) sensors, now allow the construction of sensor nodes with signal processing, wireless communications, power sources and synchronization, all packaged into inexpensive miniature devices. If these devices can be easily deployed and self-integrated into a system, they promise great benefits in providing real-time information about environmental conditions. Intelligent sensor nodes function much like individual ants that, when formed into a network, cooperatively accomplish complex tasks and provide capabilities greater than the sum of the individual parts. The paper discusses several technical challenges that must be overcome to fully realize the viability of the DSN concept in realistic application scenarios.

Journal ArticleDOI
TL;DR: The Real-Time CORBA specification includes features to manage CPU, network and memory resources and helps decrease the cycle time and effort required to develop high-quality systems by composing applications using reusable software component services rather than building them entirely from scratch.
Abstract: A growing class of real-time systems require end-to-end support for various quality-of-service (QoS) aspects, including bandwidth, latency, jitter and dependability. Applications include command and control, manufacturing process control, videoconferencing, large-scale distributed interactive simulation, and testbeam data acquisition. These systems require support for stringent QoS requirements. To meet this challenge, developers are turning to distributed object computing middleware, such as the Common Object Request Broker Architecture, an Object Management Group (OMG) industry standard. In complex real-time systems, DOC middleware resides between applications and the underlying operating systems, protocol stacks and hardware. CORBA helps decrease the cycle time and effort required to develop high-quality systems by composing applications using reusable software component services rather than building them entirely from scratch. The Real-Time CORBA specification includes features to manage CPU, network and memory resources. The authors describe the key Real-Time CORBA features that they feel are the most relevant to researchers and developers of distributed real-time and embedded systems.

Journal ArticleDOI
TL;DR: The authors survey software professionals to learn which educational topics have proved most important to them in their careers and to identify the topics for which their education or current knowledge could be improved.
Abstract: Efforts to develop licensing requirements, curricula, or training programs for software professionals should consider the experience of the practitioners who actually perform the work. We surveyed software professionals representing a wide variety of industries, job functions, and countries to learn which educational topics have proved most important to them in their careers and to identify the topics for which their education or current knowledge could be improved.

Journal ArticleDOI
TL;DR: A life cycle model for system vulnerabilities is proposed, then applied to three case studies to reveal how systems often remain vulnerable long after security fixes are available.
Abstract: The authors propose a life cycle model for system vulnerabilities, then apply it to three case studies to reveal how systems often remain vulnerable long after security fixes are available. For each case, we provide background information about the vulnerability, such as how attackers exploited it and which systems were affected. We then tie the case to the life-cycle model by identifying the dates for each state within the model. Finally, we use a histogram of reported intrusions to show the life of the vulnerability, and we conclude with an analysis specific to the particular vulnerability.

Journal ArticleDOI
TL;DR: The authors describe the face recognition technology used, explaining the algorithms for face recognition as well as novel applications, such as behavior monitoring that assesses emotions based on facial expressions.
Abstract: Smart environments, wearable computers, and ubiquitous computing in general are the coming "fourth generation" of computing and information technology. But that technology will be stillborn without new interfaces for interaction that do not depend on a keyboard or mouse. To win wide consumer acceptance, these interactions must be friendly and personalized; the next generation interfaces must recognize people in their immediate environment and, at a minimum, know who they are. In this article, the authors discuss face recognition technology, how it works, problems to be overcome, current technologies, and future developments and possible applications. Twenty years ago, the problem of face recognition was considered among the most difficult in artificial intelligence and computer vision. Today, however, there are several companies that sell commercial face recognition software that is capable of high-accuracy recognition with databases of more than 1,000 people. The authors describe the face recognition technology used, explaining the algorithms for face recognition as well as novel applications, such as behavior monitoring that assesses emotions based on facial expressions.
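
As an illustration of the appearance-based family of algorithms that dominated this era, the sketch below learns eigenface-style principal components from a gallery of aligned face images and identifies a probe image by nearest neighbor in the projected space. It is a textbook-style sketch, not the commercial algorithms the article describes.

```python
import numpy as np

def train_eigenfaces(gallery: np.ndarray, n_components: int = 20):
    """gallery: (n_images, n_pixels) array of flattened, aligned faces.
    Returns the mean face and the top principal components ("eigenfaces")."""
    mean = gallery.mean(axis=0)
    centered = gallery - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # PCA via SVD
    return mean, vt[:n_components]

def identify(probe: np.ndarray, gallery: np.ndarray, labels, mean, components):
    """Project probe and gallery into face space; return the closest label."""
    g = (gallery - mean) @ components.T
    p = (probe - mean) @ components.T
    return labels[int(np.argmin(np.linalg.norm(g - p, axis=1)))]
```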

Journal ArticleDOI
TL;DR: The PASIS architecture flexibly and efficiently combines proven technologies for constructing information storage systems whose availability, confidentiality and integrity policies can survive component failures and malicious attacks.
Abstract: As society increasingly relies on digitally stored and accessed information, supporting the availability, integrity and confidentiality of this information is crucial. We need systems in which users can securely store critical information, ensuring that it persists, is continuously accessible, cannot be destroyed and is kept confidential. A survivable storage system would provide these guarantees over time and despite malicious compromises of storage node subsets. The PASIS architecture flexibly and efficiently combines proven technologies (decentralized storage system technologies, data redundancy and encoding, and dynamic self-maintenance) for constructing information storage systems whose availability, confidentiality and integrity policies can survive component failures and malicious attacks.
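
The "data redundancy and encoding" ingredient can be illustrated with the simplest member of the threshold-scheme family: XOR splitting, in which no single storage node learns anything about the data. This sketch covers only the confidentiality side; PASIS itself draws on far more general threshold schemes that also add redundancy so the data survives node failures.

```python
import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(data: bytes, n: int) -> list[bytes]:
    """Split data into n shares: n-1 random pads plus the XOR of the data
    with all pads. All n shares are needed to reconstruct, and any n-1 of
    them reveal nothing about the data."""
    pads = [os.urandom(len(data)) for _ in range(n - 1)]
    return pads + [reduce(xor_bytes, pads, data)]

def reconstruct(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

shares = split(b"critical record", 4)   # store each share on a different node
print(reconstruct(shares))              # b'critical record'
```

An attacker must compromise every node holding a share to read the data, which is the confidentiality half of the survivability story.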

Journal ArticleDOI
TL;DR: A new view of the software life cycle is described in which maintenance is actually a series of distinct stages, each with different activities, tools, and business consequences, and both business and engineering can benefit from understanding these stages.
Abstract: Software engineers have traditionally considered any work after initial delivery as simply software maintenance. Some researchers have divided this work into various tasks, including making changes to functionality (perfective), changing the environment (adaptive), correcting errors (corrective), and making improvements to avoid future problems (preventive). However, many have considered maintenance basically uniform over time. Because software development has changed considerably since its early days, the authors believe this approach no longer suffices. They describe a new view of the software life cycle in which maintenance is actually a series of distinct stages, each with different activities, tools, and business consequences. While the industry still considers postdelivery work as simply software maintenance, the authors claim that the process actually falls into stages. They think both business and engineering can benefit from understanding these stages and their transitions.

Journal ArticleDOI
TL;DR: The answer to this question has given rise to some promising research angles, including novel ways to deal with concurrency and real time and methods for augmenting component interfaces to promote safety and adaptability.
Abstract: Once deemed too small and retro for research, embedded software has grown complex and pervasive enough to attract the attention of computer scientists. There are many research questions, but most center around one issue: how to reconcile a set of domain-specific requirements with the demands of interaction in the physical world. How do you adapt software abstractions designed merely to transform data to meet requirements like real-time constraints, concurrency, and stringent safety considerations? The answer to this question has given rise to some promising research angles, including novel ways to deal with concurrency and real time and methods for augmenting component interfaces to promote safety and adaptability.

Journal ArticleDOI
TL;DR: The authors explain their perspective based reading (PBR) technique that provides a set of procedures to help developers solve software requirements inspection problems and shows how PBR leads to improved defect detection rates for both individual reviewers and review teams working with unfamiliar application domains.
Abstract: Because defects constitute an unavoidable aspect of software development, discovering and removing them early is crucial. Overlooked defects (like faults in the software system requirements, design, or code) propagate to subsequent development phases where detecting and correcting them becomes more difficult. At best, developers will eventually catch the defects, but at the expense of schedule delays and additional product-development costs. At worst, the defects will remain, and customers will receive a faulty product. The authors explain their perspective based reading (PBR) technique that provides a set of procedures to help developers solve software requirements inspection problems. PBR reviewers stand in for specific stakeholders in the document to verify the quality of requirements specifications. The authors show how PBR leads to improved defect detection rates for both individual reviewers and review teams working with unfamiliar application domains.

Journal ArticleDOI
TL;DR: A taxonomy of possible coordination models for mobile-agent applications is proposed, and a case study helps show that the mobility of application components and the distribution area's breadth can create coordination problems different from those encountered in traditional distributed applications.
Abstract: Internet applications face challenges that mobile agents and the adoption of enhanced coordination models may overcome. Each year more applications shift from intranets to the Internet, and Internet-oriented applications become more popular. New design and programming paradigms can help harness the Web's potential. Traditional distributed applications assign a set of processes to given execution environments that act as local-resource managers, with the processes cooperating in a network-unaware fashion. In contrast, the mobile-agent paradigm defines applications as consisting of network-aware entities-agents-which can exhibit mobility by actively changing their execution environment, transferring themselves during execution. The authors propose a taxonomy of possible coordination models for mobile-agent applications, then use their taxonomy to survey and analyze recent mobile-agent coordination proposals. Their case study, which focuses on a Web-based information-retrieval application, helps show that the mobility of application components and the distribution area's breadth can create coordination problems different from those encountered in traditional distributed applications.
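
One coordination-model family that such a taxonomy covers is the Linda-like tuple space, in which agents are decoupled in space and time: they interact through a shared blackboard rather than by knowing each other's location or lifetime. The sketch below is a generic, thread-based illustration of that model, not any specific system analyzed in the article.

```python
import threading

class TupleSpace:
    """Minimal Linda-style blackboard: agents coordinate by writing
    tuples and (blockingly) withdrawing tuples that match a pattern."""
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):                       # write a tuple
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def in_(self, pattern):                   # withdraw a matching tuple (blocking)
        with self._cond:
            while True:
                for t in self._tuples:
                    if len(t) == len(pattern) and all(
                            p is None or p == v for p, v in zip(pattern, t)):
                        self._tuples.remove(t)
                        return t
                self._cond.wait()

space = TupleSpace()
space.out(("price", "ISBN-123", 42.0))          # agent A publishes a result
print(space.in_(("price", "ISBN-123", None)))   # agent B retrieves it later
```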

Journal ArticleDOI
TL;DR: The authors provide a mobile commerce framework to illustrate potential applications such as mobile inventory management, product location and search, proactive service management, and mobile entertainment and describe the wireless user and networking infrastructure, emerging W3C standards, and the open and global WAP specification.
Abstract: Electronic commerce continues to see phenomenal growth, but so far most e-commerce development involves wired infrastructures. The authors believe emerging wireless and mobile networks will provide new avenues for growth, creating new opportunities in mobile commerce. According to the GartnerGroup, a market research firm, by 2004 at least 40 percent of consumer-to-business e-commerce will come from smart phones using the wireless application protocol (WAP). Based on a study by the Wireless Data and Computing Service, a division of Strategy Analytics, the annual mobile commerce market may rise to $200 billion by 2004. The authors provide a mobile commerce framework to illustrate potential applications such as mobile inventory management, product location and search, proactive service management, and mobile entertainment. They also describe the wireless user and networking infrastructure, emerging W3C standards, and the open and global WAP specification.

Journal ArticleDOI
TL;DR: In this article, the authors argue that there is no way around biometrics-based identification if we insist on positive, reliable, and irrefutable identification, and they hope that a pervasive, accountable use of biometric technology will help establish a more open and fair society.
Abstract: It is too early to predict where, how, and in which form reliable biometric services will eventually be delivered. But it is certain that there is no way around biometrics-based identification if we insist on positive, reliable, and irrefutable identification. As fraud in our society grows, as the pressure to deliver inexpensive authentication services mounts, and as geographically mobile individuals increasingly need to establish their identity as strangers in remote communities, the problem of reliable personal identification becomes more and more difficult. To catapult biometric technology into the mainstream identification market, it is important to encourage its evaluation in realistic contexts, to facilitate its integration into end-to-end solutions, and to foster innovation of inexpensive and user-friendly implementations. We hope that a pervasive, accountable use of biometrics technology will help establish a more open and fair society.

Journal ArticleDOI
TL;DR: The authors assert that maintaining consistency at all times is counterproductive and advocate using inconsistency to highlight problem areas, using it as a tool to improve the development team's shared understanding, direct the process of requirements elicitation, and assist with verification and validation.
Abstract: Software engineers make use of many descriptions, including analysis models, specifications, designs, program code, user guides, test plans, change requests, style guides, schedules, and process models. But since different developers construct and update these descriptions at various times during development, maintaining consistency among descriptions presents several problems. Descriptions tend to vary considerably. Individual descriptions can be ill-formed or self-contradictory and frequently evolve throughout the life cycle at different rates. Also, checking the consistency of a large, arbitrary set of descriptions is computationally expensive. The authors assert that maintaining consistency at all times is counterproductive. In many cases, it may be desirable to tolerate or even encourage inconsistency to facilitate distributed team-work and prevent premature commitment to design decisions. They advocate using inconsistency to highlight problem areas, using it as a tool to improve the development team's shared understanding, direct the process of requirements elicitation, and assist with verification and validation.

Journal ArticleDOI
TL;DR: Cambridge University researchers developed middleware extensions that provide a flexible, scalable approach to distributed-application development that has provided support for emerging applications.
Abstract: In the late 1980s, software designers introduced middleware platforms to support distributed computing systems. Since then, the rapid evolution of technology has caused an explosion of distributed-processing requirements. Application developers now routinely expect to support multimedia systems and mobile users and computers. Timely response to asynchronous events is crucial to such applications, but current platforms do not adequately meet this need. Another need of existing and emerging applications is the secure interoperability of independent services in large-scale, widely distributed systems. Information systems serving organizations such as universities, hospitals, and government agencies require cross-domain interaction. To meet the needs of these applications, Cambridge University researchers developed middleware extensions that provide a flexible, scalable approach to distributed-application development. This article details the extensions they developed, explaining their distributed software approach and the support it has provided for emerging applications.
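
The asynchronous-event support mentioned above is commonly delivered through a publish/subscribe service, in which producers emit events without knowing who consumes them. The sketch below is a generic, in-process illustration of that pattern, not the Cambridge middleware's actual interface.

```python
from collections import defaultdict
from typing import Callable

class EventBroker:
    """Tiny publish/subscribe broker illustrating asynchronous event
    notification on top of a request/reply platform. Generic sketch only."""
    def __init__(self):
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(event)      # delivered without the publisher knowing
                                # who, if anyone, is listening

broker = EventBroker()
broker.subscribe("patient.admitted", lambda e: print("notify ward:", e["id"]))
broker.publish("patient.admitted", {"id": "A-17"})
```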

Journal ArticleDOI
TL;DR: The authors' technique combines several data compression features to provide economical storage, faster indexing, and accelerated searches.
Abstract: The continually growing Web challenges information retrieval systems to deliver data quickly. The authors' technique combines several data compression features to provide economical storage, faster indexing, and accelerated searches.
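
One standard way compression serves both storage and search is to encode the gaps between document numbers in an inverted list with a variable-byte code, so the common small gaps occupy a single byte. The sketch below illustrates that general idea only; it is not necessarily the scheme the authors use.

```python
def vbyte_encode(gaps: list[int]) -> bytes:
    """Variable-byte code for the gaps between document numbers in an
    inverted list: small gaps take one byte, so postings are cheaper to
    store and faster to scan. Illustrative only."""
    out = bytearray()
    for g in gaps:
        while g >= 128:
            out.append(g & 0x7F)
            g >>= 7
        out.append(g | 0x80)          # high bit marks the final byte
    return bytes(out)

def vbyte_decode(data: bytes) -> list[int]:
    gaps, g, shift = [], 0, 0
    for b in data:
        g |= (b & 0x7F) << shift
        if b & 0x80:
            gaps.append(g)
            g, shift = 0, 0
        else:
            shift += 7
    return gaps

postings = [3, 8, 12, 170]                        # document numbers
gaps = [postings[0]] + [b - a for a, b in zip(postings, postings[1:])]
assert vbyte_decode(vbyte_encode(gaps)) == gaps   # round-trips: [3, 5, 4, 158]
```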

Journal ArticleDOI
TL;DR: What "current" means for Web search engines and how often they must reindex the Web to keep current with its changing pages and structure are quantified.
Abstract: Most information depreciates over time, so keeping Web pages current presents new design challenges. This article quantifies what "current" means for Web search engines and estimates how often they must reindex the Web to keep current with its changing pages and structure.
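
A back-of-the-envelope model shows how the reindexing interval interacts with page change rates: if a page changes as a Poisson process, the expected fraction of time its indexed copy is up to date has a closed form. This simplified model is for illustration only; the article's estimates rest on measured page-change data and a more refined notion of currency.

```python
from math import exp

def average_freshness(mean_lifetime_days: float, reindex_days: float) -> float:
    """Fraction of time an indexed copy of a page is up to date, assuming
    the page changes as a Poisson process with the given mean lifetime and
    the crawler revisits it every `reindex_days`. Simplified illustration."""
    lam = 1.0 / mean_lifetime_days        # change rate
    x = lam * reindex_days
    return (1.0 - exp(-x)) / x            # average of exp(-lam*t) over [0, T]

# Pages that change on average every 75 days, reindexed monthly vs. quarterly:
print(round(average_freshness(75, 30), 2))   # ~0.82
print(round(average_freshness(75, 90), 2))   # ~0.58
```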

Journal ArticleDOI
TL;DR: This work describes software components as units of independent production, acquisition, and deployment that interact to form a functional system, and identifies a set of issues that software developers must address for component-based systems (CBS) to achieve their full potential.
Abstract: Developing and using various component forms as building blocks can significantly enhance software-based system development and use, which is why both the academic and commercial sectors have shown interest in component-based software development. Indeed, much effort has been devoted to defining and describing the terms and concepts involved. Briefly, we describe software components as units of independent production, acquisition, and deployment that interact to form a functional system. We identify a set of issues organized within an overall framework that software developers must address for component-based systems (CBS) to achieve their full potential.