
Showing papers in "Communications of The ACM in 2001"


Journal ArticleDOI
TL;DR: This article will argue that analyzing, designing, and implementing complex software systems as a collection of interacting, autonomous agents (that is, as a multiagent system) affords software engineers a number of significant advantages over contemporary methods.
Abstract: Building high-quality, industrial-strength software is difficult. Indeed, it has been argued that developing such software in domains like telecommunications, industrial control, and business process management represents one of the most complex construction tasks humans undertake. Against this background, a wide range of software engineering paradigms have been devised. Each successive development either claims to make the engineering process easier or promises to extend the complexity of applications that can feasibly be built. Although evidence is emerging to support these claims, researchers continue to strive for more effective techniques. To this end, this article will argue that analyzing, designing, and implementing complex software systems as a collection of interacting, autonomous agents (that is, as a multiagent system [4]) affords software engineers a number of significant advantages over contemporary methods. This is not to say that agent-oriented software engineering represents a silver bullet [2]—there is no evidence to suggest it will represent an order-of-magnitude improvement in productivity. However, the increasing number of deployed applications [4, 8] bears testament to the potential advantages that accrue from such an approach. In seeking to demonstrate the efficacy of agent-oriented techniques, the most compelling argument would be to quantitatively show how their adoption improved the development process in a range of projects. However, such data is simply not available (as it is not for approaches like patterns, application frameworks, and componentware). Given this situation, the best that can be achieved is a qualitative justification of why agent-oriented approaches are well suited to engineering complex, distributed software systems.

1,295 citations


Journal ArticleDOI
TL;DR: Object-orientation brings together behavior and data into a single conceptual (and physical) entity.
Abstract: Computer science has experienced an evolution in programming languages and systems, from the crude assembly and machine codes of the earliest computers through concepts such as formula translation, procedural programming, structured programming, functional programming, logic programming, and programming with abstract data types. Each of these steps in programming technology has advanced our ability to achieve clear separation of concerns at the source code level. Currently, the dominant programming paradigm is object-oriented programming - the idea that one builds a software system by decomposing a problem into objects and then writing the code of those objects. Such objects abstract together behavior and data into a single conceptual and physical entity. Object-orientation is reflected in the entire spectrum of current software development methodologies and tools - we have OO methodologies, analysis and design tools, and OO programming languages. Writing complex applications such as graphical user interfaces, operating systems, and distributed applications while maintaining comprehensible source code has been made possible with OOP. Success at developing simpler systems leads to aspirations for greater complexity. Object orientation is a clever idea, but it has certain limitations. We are now seeing that many requirements do not decompose neatly into behavior centered on a single locus. Object technology has difficulty localizing concerns involving global constraints and pandemic behaviors, appropriately segregating concerns, and applying domain-specific knowledge. Post-object programming (POP) mechanisms that look to increase the expressiveness of the OO paradigm are a fertile arena for current research. Examples of POP technologies include domain-specific languages, generative programming, generic programming, constraint languages, reflection and metaprogramming, feature-oriented development, views/viewpoints, and asynchronous message brokering. (Czarnecki and Eisenecker's book includes a good survey of many of these technologies.)

592 citations


Journal ArticleDOI
TL;DR: Many software developers are attracted to the idea of AOP but are unsure how to begin using the technology and what risks adopting it carries.
Abstract: Many software developers are attracted to the idea of AOP, but are unsure about how to begin using the technology. They recognize the concept of crosscutting concerns, and know that they have had problems with the implementation of such concerns in the past. But there are many questions about how to adopt AOP into the development process. Common questions include: Can I use aspects in my existing code? What kinds of benefits can I expect to get? How do I find aspects? How steep is the learning curve for AOP? What are the risks of using this new technology?
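The questions above are easier to weigh with a concrete crosscutting concern in hand. The abstract names none, so the sketch below is a hypothetical illustration: it uses tracing, the classic introductory example, and stands in for real AOP tooling such as AspectJ with a plain JDK dynamic proxy, which can express simple before/after advice on interface methods without touching existing code. The AccountService types and method names are made up.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Hypothetical business interface; the "base code" we do not want to edit.
interface AccountService {
    void transfer(String from, String to, double amount);
}

class AccountServiceImpl implements AccountService {
    public void transfer(String from, String to, double amount) {
        System.out.printf("transferring %.2f from %s to %s%n", amount, from, to);
    }
}

public class TracingDemo {
    // Wraps any interface-typed object with tracing "advice"; the concern
    // lives here, in one module, instead of being scattered across methods.
    static <T> T withTracing(T target, Class<T> iface) {
        InvocationHandler h = (proxy, method, args) -> {
            System.out.println("enter: " + method.getName());   // "before" advice
            try {
                return method.invoke(target, args);
            } finally {
                System.out.println("exit: " + method.getName()); // "after" advice
            }
        };
        return iface.cast(Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[]{iface}, h));
    }

    public static void main(String[] args) {
        AccountService svc = withTracing(new AccountServiceImpl(), AccountService.class);
        svc.transfer("alice", "bob", 10.0);
    }
}
```

A real aspect language goes well beyond this: pointcuts quantify over many join points at once, rather than wrapping one object at a time, which is precisely what makes the adoption questions above worth asking.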

591 citations


Journal ArticleDOI
TL;DR: The ever-expanding variety of multiplayer games and simulators demonstrates the potential of CVEs in leisure and entertainment, the most notable examples being games such as Doom and Quake.
Abstract: CVEs can be seen as the result of a convergence of research interests within the VR and computer-supported cooperative work (CSCW) communities. Within the VR community, CVEs represent a natural extension of current commercial single-user VR technology to support multiple participants. This extension allows better support for a range of applications. For example, the communication between instructors and trainees central to simulation and training applications can be supported. Visualizations may also be shared and discussed by teams of scientists or decision-makers. Finally, the ever-expanding variety of multiplayer games and simulators demonstrates the potential of CVEs in leisure and entertainment, the most notable examples being games such as Doom and Quake. In all of these examples, participants are often physically dispersed and communicating over a computer network. Within the CSCW community, CVEs represent a technology that may support some aspects of social interaction not readily accommodated by technologies such as audio and videoconferencing and shared

439 citations


Journal ArticleDOI
TL;DR: Here, the Composition Filters (CF) model is presented and how it addresses evolving crosscutting concerns is illustrated.
Abstract: It has been demonstrated that certain design concerns, such as access control, synchronization, and object interactions cannot be expressed in current OO languages as a separate software module [4, 7]. These so-called crosscutting concerns generally result in implementations scattered over multiple operations. If a crosscutting concern cannot be treated as a single module, its adaptability and reusability are likely to be reduced. A number of programming techniques have been proposed to express crosscutting concerns, for example, adaptive programming [9], AspectJ [8], Hyperspaces [10], and Composition Filters [1]. Here, we present the Composition Filters (CF) model and illustrate how it addresses evolving crosscutting concerns.
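The abstract does not include the CF notation itself (which is a declarative filter specification layered on the base language), so the following is only a rough, hypothetical Java rendering of the core idea: every incoming message passes through an ordered list of input filters that can reject or pass it, keeping a concern such as access control in one module instead of scattered over the methods it guards.

```java
import java.util.ArrayList;
import java.util.List;

// A message reifies a method call: a selector plus arguments.
record Message(String selector, Object[] args) {}

// An input filter inspects a message before it reaches the implementation.
interface Filter {
    boolean accept(Message m);
}

// Access control expressed as one filter rather than checks in every method.
class RequireLogin implements Filter {
    private final boolean loggedIn;
    RequireLogin(boolean loggedIn) { this.loggedIn = loggedIn; }
    public boolean accept(Message m) {
        return loggedIn || !m.selector().equals("withdraw");
    }
}

class FilteredAccount {
    private final List<Filter> inputFilters = new ArrayList<>();
    private double balance = 100.0;

    FilteredAccount(Filter... filters) { inputFilters.addAll(List.of(filters)); }

    // Single entry point: filters run in order before dispatch.
    Object send(Message m) {
        for (Filter f : inputFilters)
            if (!f.accept(m)) throw new IllegalStateException("rejected: " + m.selector());
        return switch (m.selector()) {
            case "withdraw" -> balance -= (double) m.args()[0];
            case "balance"  -> balance;
            default -> throw new IllegalArgumentException(m.selector());
        };
    }
}

public class CfSketch {
    public static void main(String[] args) {
        FilteredAccount acct = new FilteredAccount(new RequireLogin(false));
        System.out.println(acct.send(new Message("balance", new Object[0])));
        try {
            acct.send(new Message("withdraw", new Object[]{25.0}));
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // rejected: withdraw
        }
    }
}
```

When the access-control policy evolves, only the filter changes; that locality is the adaptability argument the article makes for treating a crosscutting concern as a single module.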

342 citations


Journal ArticleDOI
TL;DR: Visual data exploration seeks to integrate humans in the data exploration process, applying their perceptual abilities to the large data sets now available; the basic idea is to present the data in some visual form, allowing data analysts to gain insight into it, draw conclusions, and interact with it.
Abstract: Computer systems today store vast amounts of data. Researchers, including those working on the "How Much Information?" project at the University of California, Berkeley, recently estimated that about 1 exabyte (1 million terabytes) of data is generated annually worldwide, some 99.997% of it available only in digital form. This worldwide data deluge means that in the next three years, more data will be generated than during all previous human history. Data is often recorded, captured, and stored automatically via sensors and monitoring systems. Many of the simple transactions now part of our everyday lives, such as paying for food and clothes by credit card or using the telephone, are typically recorded for future reference by computers. Many parameters of each transaction are routinely captured, resulting in highly dimensional data. The data is collected because companies, including those engaged in some kind of e-commerce, view it as a source of potentially valuable information that, as a strategic asset, could provide a competitive advantage. But actually finding this valuable information is difficult. Today's data management systems make it possible to view only small portions of it. If the data is presented in text form, the amount that can be displayed amounts to only about 100 data items—a drop in the ocean when dealing with data sets containing millions of data items. Lacking the ability to adequately explore the large amounts being collected, and despite its potential usefulness, the data becomes useless and the databases become data dumps. Visual data exploration, which aims to provide insight by visualizing the data, and information visualization techniques (such as distorted overview displays and dense pixel displays) can help solve this problem. Effective data mining depends on having a human in the data exploration process, combining this person's flexibility, creativity, and general knowledge with the enormous storage capacity and computational power of today's computers. Visual data exploration seeks to integrate humans in the data exploration process, applying their perceptual abilities to the large data sets now available. The basic idea is to present the data in some visual form, allowing data analysts to gain insight into it and draw conclusions, as well as interact with it. The visual representation of the data reduces the cognitive work needed to perform certain tasks. Visual data mining techniques have proved their value in exploratory data analysis; they also have great potential. (Article title: "Visual Exploration of Large Data Sets")
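Of the technique families mentioned, dense pixel displays are the easiest to sketch: each data item becomes one colored pixel, so a single 512-by-512 image presents about 262,000 items at once versus the roughly 100 that fit on a text screen. The toy Java sketch below is my illustration of that idea only, not the article's method; the random data and the output file name are made up.

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.util.Random;
import javax.imageio.ImageIO;

public class DensePixels {
    public static void main(String[] args) throws Exception {
        int w = 512, h = 512;
        // Stand-in data set: 262,144 values in [0,1).
        double[] data = new Random(42).doubles(w * h).toArray();

        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int i = 0; i < data.length; i++) {
            int shade = (int) (255 * data[i]);             // map value to 0..254
            int rgb = (shade << 16) | (shade << 8) | 0xFF; // blue-to-white ramp
            img.setRGB(i % w, i / w, rgb);                 // one pixel per item
        }
        ImageIO.write(img, "png", new File("dense_pixels.png"));
        System.out.println("wrote " + data.length + " data items as one image");
    }
}
```

Real dense-pixel techniques add space-filling layouts and orderings that keep related items adjacent; the point of the sketch is only the one-pixel-per-item density that makes million-item data sets visible at all.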

328 citations


Journal ArticleDOI
TL;DR: The ancient art of storytelling and its adaptation in film and video can now be used to efficiently convey information in the authors' increasingly computerized world.
Abstract: For as long as people have been around, they have used stories to convey information, cultural values, and experiences. From the invention of writing and the printing press until today, technology and culture have constantly provided new and increasingly sophisticated means to tell stories. More recently, technology, entertainment, and art have converged in the computer. The ancient art of storytelling and its adaptation in film and video can now be used to efficiently convey information in our increasingly computerized world. A well-told story conveys great quantities of information in relatively few words in a format that is easily assimilated by the listener or viewer. People usually find it easier to understand information integrated into stories than information spelled out in serial lists (such as bulleted items in an overhead slide). Stories are also just more compelling. For example, despite its sketchiness, the story fragment in Figure 1 is loaded with information, following an analysis similar to that of John Thomas of IBM Research [5]. We find that Jim uses technology (a pager and the Internet) and is dedicated to his job. Many other pieces of information can be deduced about Jim and his work, as well as about his relationships with his coworkers, as noted in the right side of the figure. The story does not express all this information explicitly; some is only implied. For example, we can surmise that Jim is probably not at the gym and that his attendance at the meeting is important to his boss and coworkers, as well as to his company's business performance. As in most stories, this one involves uncertainty. (Article title: "What Storytelling Can Do for Information Visualization")

317 citations


Journal ArticleDOI
TL;DR: Given the significant costs involved in putting technology into schools and given the potential to harm young children, one prominent report calls for “An immediate moratorium on the further introduction of computers in ... elementary education”.
Abstract: Given the significant costs involved in putting technology into schools and given the potential to harm young children, one prominent report calls for “An immediate moratorium on the further introduction of computers in ... elementary education” [3]. Rather than getting defensive, gesticulating wildly, and dragging out that favorite story about how one child we personally know accomplished an amazing thing with a computer, it’s time to come out of the closet: children simply aren’t using computers in K–12 schools and that’s why there isn’t substantial data on the impact of computers in K–12 education. Let’s look at some basic statistics about availability and use of computers in K–12:

312 citations


Journal ArticleDOI
TL;DR: Using traditional and emerging access control approaches to develop secure applications for the Web with a focus on mobile devices.
Abstract: Using traditional and emerging access control approaches to develop secure applications for the Web.

307 citations


Journal ArticleDOI
TL;DR: While email will continue to dominate wireless applications, innovative online applications that, for instance, use location reference information of end users will drive new areas of mobile e-business growth.
Abstract: Most current e-commerce transactions are conducted by users in fixed locations using workstations and personal computers. Soon, we expect a significant portion of e-commerce will take place via wireless, Internet-enabled devices such as cellular phones and personal digital assistants. Wireless devices provide users mobility to research, communicate, and purchase goods and services from anywhere at any time without being tethered to the desktop. Using the Internet from wireless devices has come to be known as mobile e-commerce, or simply "m-commerce," and encompasses many more activities than merely online purchasing. One of the major wireless applications is Web access for retrieval of real-time information such as weather reports, sport scores, flight and reservation information, navigational maps, and stock quotes. While email will continue to dominate wireless applications, innovative online applications that, for instance, use location reference information of end users will drive new areas of mobile e-business growth. Strategy Analytics, among other market research groups, predicts that by 2004 there will be over one

271 citations


Journal ArticleDOI
Harold Ossher, Peri Tarr
TL;DR: Simplifying development, evolution, and integration of Java software using Hyper/J.
Abstract: Separation of concerns [11] is a key guiding principle of software engineering. It refers to the ability to identify, encapsulate, and manipulate only those parts of software that are relevant to a particular concept, goal, or purpose. Concerns are the primary criteria for decomposing software into smaller, more manageable and comprehensible parts that have meaning to a software engineer. As software becomes more pervasive and its life expectancy increases, it becomes subject to greater pressures to integrate and interact with other pieces of software—often off-the-shelf software that has been written by entirely separate organizations—and to evolve and adapt to uses in new and unanticipated contexts, both technological (new hardware, operating systems, software configurations, standards) and sociological (new domains, business practices, processes and regulations, users).

Journal ArticleDOI
TL;DR: Intelligent business agents are the next higher level of abstraction in model-based solutions to business-to-business e-commerce applications; they can help address serious technological challenges such as effective searching, security and privacy, and interoperability between the diverse business processes and information required to achieve tele-cooperation and global e-commerce.
Abstract: The rapid growth of the Internet, networking systems such as electronic data interchange systems, and the penetration of ISDN-based applications are stimulating an ever-increasing number of businesses to participate in e-commerce worldwide. For example, businesses use the Web to improve internal communication, help manage supply chains, conduct technical and market research, and locate potential partners. Moreover, innovative enterprises with good partner relationships are beginning to capitalize on the enormous potential of new global networking possibilities and are beginning to share sales data, customer buying patterns, and future plans with their suppliers and customers. One of the key characteristics of the e-business world is that companies will inevitably move more and more into a customer-centric paradigm in order to increase competitiveness. Customer behavior cannot be accurately predicted using traditional analytic methods like forecasting or budgeting. Instead, companies seeking a competitive edge will investigate other kinds of analytical methods based on, for example, heuristics and AI techniques. Intelligent business agents are the next higher level of abstraction in model-based solutions to business-to-business e-commerce applications. By building on the distributed object foundation, agent technology can help bridge the remaining gap between flexible design and usable applications. Agents support a natural merging of object orientation and knowledge-based technologies. They can facilitate the incorporation of reasoning capabilities within the business application logic (for example, encapsulation of business rules within agents or modeled organizations). They permit the inclusion of learning and self-improvement capabilities at both the infrastructure (adaptive routing) and application (adaptive user interfaces) levels. Unlike objects, business agents can participate in high-level (task-oriented) dialogues through the use of interaction protocols in conjunction with built-in organizational knowledge. In many cases, the need for communication is greatly reduced, as within these high-level dialogues, complex packets of procedural and declarative knowledge as well as state information may be exchanged in the form of mobile objects. In addition, agent technology can help address serious technological challenges such as concerns about effective searching, security and privacy, and interoperability between the diverse business processes and information required to achieve tele-cooperation and global e-commerce. The opportunities for using intelligent agents in an e-business application are enormous. For example, they can be used for real-time pricing and auctioning, involving different parties in a supply-chain network. Suppliers can present their products on the Web and

Journal ArticleDOI
TL;DR: Aspect-oriented programming is a new evolution in the line of separation-of-concerns technology; it allows design and code to be structured to reflect the way developers want to think about the system.
Abstract: Aspect-oriented programming is a new evolution in the line of separation-of-concerns technology, allowing design and code to be structured to reflect the way developers want to think about the system. AOP builds on existing technologies and provides additional mechanisms that make it possible to affect the implementation of systems in a crosscutting way.

Journal ArticleDOI
TL;DR: Pebbles spreads computing functions and their related user interfaces across all computing and input/output devices available to a particular user or group of users, even when they're communicating wirelessly.
Abstract: Using handhelds and PCs together: Pebbles spreads computing functions and their related user interfaces across all computing and input/output devices available to a particular user or group of users, even when they're communicating wirelessly.

Journal ArticleDOI
TL;DR: The results reported here demonstrate that female underrepresentation in computer science could be avoided; many women succeed as computer scientists in certain times and settings.
Abstract: Although many computer professionals believe that inherent or deeply ingrained gender differences make women less suited to the study and practice of computer science [5, 9], the results reported here demonstrate that female underrepresentation in computer science could be avoided. Women can and do succeed in computer science (CS) when conditions do not deter them. The variation that occurs in women's participation rates demonstrates that many women succeed as computer scientists in certain times and settings. Conditions affecting female retention in undergraduate computer science are identified in this article.(1) Evidence that women's success in computer science varies over time was provided in an article by Camp that appeared in Communications in 1997 [2]. In that article, Camp documented the rise and fall in the female proportion of computer science Bachelor's degrees between 1981 and 1994. Camp also noted that this variation was affected by the type of college (engineering/nonengineering) in which a CS department was located. Figure 1 expands Camp's timeframe to the most recent available data and reconfirms that women's proportion of CS Bachelor's degrees waxes and wanes. As Figure 1 shows, women comprised 14% of CS Bachelor's degrees in the U.S. in 1971; this percentage rose to 37% by 1984, and then dropped 10 percentage points over the subsequent 13 years. These temporal changes in female representation are not statistical phantoms that can be easily explained away. In particular, they are not attributable to general trends in female educational attainment—women's proportion of all Bachelor's degrees rose steadily from 46% to 56% during this period. Furthermore, women's proportion of non-CS scientific and technical disciplines also rose during this period [7]. The temporal changes in female representation were also not attributable to the effects of newly formed CS departments—a similar rising and falling pattern is observed. What causes women to discontinue pursuing the undergraduate computer science major at higher rates than men? (Article title: "Toward Improving Female Retention in the Computer Science Major") (1) In the U.S., 69% of the female college entrants who intended to major in computer science in 1987 switched to some other major by 1991 [10]. This female switching rate compares very unfavorably with the male switching rate of 46%.

Journal ArticleDOI
TL;DR: This work seeks to identify whether IS journals are perceived differently across regions of the world, and provides three measures of journal ranking in order to generate a richer picture of how quality may actually be interpreted.
Abstract: As the pressure for scientifically rigorous and relevant research mounts [10], authors need to identify those outlets with the highest visibility for their work and those publications in which readers seek the best sources for informed IS research. Increasingly, academics and institutions all over the world attach significant importance to journal rankings for promotion, tenure, and assessment purposes. In particular, U.S. college and university faculty promotion and tenure decisions are decided on the basis of academic research output in top-tier journals. Furthermore, the Research Assessment Exercise in the U.K. ranks university departments for the purpose of distributing government research funds. This process measures research excellence by assessing where faculty publish, taking into account the respective journal standing. There is evidence to suggest that universities on both sides of the Atlantic increasingly use journal lists for internal assessment and promotion purposes. We report survey results that contribute to the general interest in journal ranking. Prior studies of IS journal rankings have been limited to the North American (Canada and the U.S.) IS community. However, there is substantial research evidence that academics from different regions of the world have different research approaches [1]. Similarly, IS practitioners from different countries assign different priorities to IS management issues [8]. Taking this perspective into consideration, we seek to identify whether IS journals are perceived differently across regions of the world. We also provide three measures of journal ranking in order to generate a richer picture of how quality may actually be interpreted. Moreover, our questionnaire design addresses an important issue raised in prior research. Hardgrave and Walstrom [3] rightly argue their data cannot make the common distinction between A-level and B-level journals. We address this issue head-on by asking respondents to make the classification themselves. Finally, in an effort to be as representative as possible, we attracted nearly 1,000 responses, by far the largest sample in this type of research. Data collection was carried out through an online questionnaire. We targeted members of the ISWorld mailing list and the IS Faculty Directory on www.isworld.org. These sources represent the most complete and authoritative community of "information management scholars and practitioners." Of the 3,855 email invitations we sent out, 1,094 bounced, leaving 2,761 recipients; 1,010 responses were collected, of which 979 were usable. This represents a 35.45% usable response rate. By region, we received 605 responses from North …
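The reported response rate can be checked directly from the counts given, a useful sanity check when reading survey research:

\[
\text{usable response rate} \;=\; \frac{979}{3855 - 1094} \;=\; \frac{979}{2761} \;=\; 0.35458\ldots,
\]

which agrees with the stated 35.45% up to rounding.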



Journal ArticleDOI
TL;DR: The initial studies of property tax payments indicate that Webenabled payment reduces processing costs from more than $5 to around 22 cents per transaction, and the potential savings of e-democracy could be as much as $110 billion.
Abstract: … principle of democracy—efficiency. Efficient government keeps at bay the populists and demagogues who appeal to those who want the trains to run on time. The goal of electronic democracy is to deploy information technology to improve the effectiveness and efficiency of democracy [8]. On the efficiency side, the intention is to increase the convenience and timeliness of citizen/government interactions and reduce their cost. Information will be more readily available and transaction costs significantly reduced. Our initial studies of property tax payments indicate that Web-enabled payment reduces processing costs from more than $5 to around 22 cents per transaction. The potential savings of e-democracy could be as much as $110 billion.
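Taken at face value, the per-transaction figures imply a saving of about

\[
\$5.00 - \$0.22 = \$4.78 \text{ per transaction.}
\]

The abstract does not say what volume or scope underlies the $110 billion estimate; purely as an illustrative back-of-envelope (my assumption, not the authors' model), payment savings alone would have to cover roughly \(110 \times 10^9 / 4.78 \approx 2.3 \times 10^{10}\) transactions to reach that figure, so the estimate presumably also counts efficiency gains well beyond payment processing.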

Journal ArticleDOI
TL;DR: If software is not a product but a medium for storing knowledge, then software development is not a product-producing activity; it is a knowledge-acquiring activity.
Abstract: In my first column (Aug. 2000, p. 19), I argued that software is not a product, but rather a medium for the storage of knowledge. In fact, it is the fifth such medium that has existed since the beginning of time. The other knowledge storage media are, in historical order: DNA, brains, hardware, and books. The reason software has become the storage medium of choice is that knowledge in software has been made active. It has escaped the confinement and volatility of knowledge in brains; it avoids the passivity of knowledge in books; it has the flexibility and speed of change missing from knowledge in DNA or hardware. If software is not a product, then what is the product of our efforts to produce software? It is the knowledge contained in the software. It's rather easy to produce software. It's much more difficult to produce software that works, because we have to understand the meaning of "works." It's easy to produce simple software because it doesn't contain much knowledge. Software is easier to produce using an application generator, because much of the knowledge is already stored in the application generator. Software is easy to produce if I've already produced this type of system before, because I have already obtained the necessary knowledge. So, the hard part of building systems is not building them, it's knowing what to build—it's in acquiring the necessary knowledge. This leads us to another observation: if software is not a product but a medium for storing knowledge, then software development is not a product-producing activity; it is a knowledge-acquiring activity.

Journal ArticleDOI
TL;DR: An operation in an object-oriented program often involves several collaborating classes; putting the whole operation into a single method tangles information about the structure of the classes into that method, making it difficult to adapt to changes in the class structure.
Abstract: An operation in an object-oriented program often involves several different collaborating classes. There are usually two ways to implement such an operation: either put the whole operation into one method on one of the classes, or divide the operation into methods on each of the classes involved. The drawback of the former is that too much information about the structure of the classes (is-a and has-a relationships) needs to be tangled into each such method, making it difficult to adapt to changes in the class structure. However, the latter scatters the operation across multiple classes, making it difficult to adapt when the operation changes.
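To make the two options concrete, here is a hypothetical Java fragment (the expression-tree example is mine, not the article's). It shows Option A, the whole operation in one method; Option B would instead put an eval() method on each class, localizing structure knowledge but scattering the operation.

```java
// Tiny class structure: an expression tree.
sealed interface Expr permits Num, Add {}
record Num(int value) implements Expr {}
record Add(Expr left, Expr right) implements Expr {}

public class EvalDemo {
    // Option A: the operation lives in one place, but is-a/has-a knowledge of
    // every participating class is tangled into it; adding a new Expr type
    // forces this method to change.
    static int eval(Expr e) {
        return switch (e) {
            case Num n -> n.value();
            case Add a -> eval(a.left()) + eval(a.right());
        };
    }

    public static void main(String[] args) {
        Expr e = new Add(new Num(1), new Add(new Num(2), new Num(3)));
        System.out.println(eval(e)); // 6
    }
}
```

The tension between these two layouts, tangling structure into the operation versus scattering the operation across the structure, is exactly what the aspect-oriented techniques discussed elsewhere in this issue aim to relieve.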

Journal ArticleDOI
TL;DR: In 1994, a survey by consulting company KPMG of 120 organizations in the UK found 62% of them had experienced a runaway project, and today, your shelves are filled with books covering numerous examples of runaway software projects.
Abstract: Despite advances in software engineering, project failure remains a critical challenge for the software development community. According to the Standish Group's 1998 survey, only 26% of such projects were delivered on time, on budget, and with promised functionality; 46% were completed over budget and behind schedule, with fewer functions and features than originally specified, and could therefore be classified as runaway projects, wasting billions of dollars annually [10]. In 1994, a survey by consulting company KPMG of 120 organizations in the UK found 62% of them had experienced a runaway project [7]. Today, you can fill your shelves with books covering numerous examples of runaway software projects [3].

Journal ArticleDOI
TL;DR: From an enterprise-centric perspective, applications such as customer relationship management (CRM) systems and supplier management systems are considered extensions of the enterprise systems, or parts of the extended enterprise resource planning (EERP) systems.
Abstract: Supplying value to the consumer, that is, goods and services, is the essence of business. A supply chain is a network of organizations and their associated activities that work together, usually in a sequential manner, to produce value for the consumer. Customer-facing firms at the retail level, whether large department stores, automobile dealerships, or fast-food franchises, are only the tip of the iceberg. Behind them exist entire networks of manufacturers and distributors, transportation and logistics firms, banks, insurance companies, brokers, warehouses, and freight-forwarders, all directly or indirectly attempting to make sure the right goods and services are available at the right price, where and when the customers want them. Having delivered the goods or services, the chain does not terminate. At the front end, through delivery, installation, customer education, help desks, maintenance, and repair, the goods or services are made useful to the customer. At the end of the product life, reverse logistics ensures that used and discarded products are disassembled, brought back, and where possible, recycled and sent back into the supply network. The scope of the supply chain thus extends from "dirt to dirt," from the upstream sources of supply down to the point of consumption, and finally retirement and recycling. Conventional strategic thinking has focused on individual firms as the competitive unit in any industry. For example, supermarkets compete against supermarkets, automobile dealers compete against automobile dealers, and buggy whip manufacturers compete against other buggy whip manufacturers. In these scenarios, enterprise-focused systems such as enterprise resource planning (ERP) systems, executive information systems, and decision support systems become key to achieving cost efficiencies and organizational effectiveness through intraorganizational process integration [3]. Moreover, from an enterprise-centric perspective, applications such as customer relationship management (CRM) systems and supplier management systems are considered extensions of the enterprise systems, or parts of the extended enterprise resource planning (EERP) systems. While firms still continue to compete individually, the example of the buggy whip manufacturer clearly shows that when an entire supply chain of buggies, buggy whips, stables, and roadside carriage-hostelries loses its competitive battle against the supply chain arranged around the automobile, the buggy whip manufacturer, however efficient in producing products of fine quality, inevitably rides into oblivion. Consequently, the competitive success of a firm is no longer a function of its individual efforts—it depends, to a great extent, on how well the entire supply chain, as …

Journal ArticleDOI
Line Dubé, Guy Paré
TL;DR: Recent interviews with GVT leaders and members offer critical advice from the trenches regarding the challenges and coping strategies for collaborating on a global scale.
Abstract: Recent interviews with GVT leaders and members offer critical advice from the trenches regarding the challenges and coping strategies for collaborating on a global scale.


Journal ArticleDOI
TL;DR: Whenever the description of a software artifact exhibits crosscutting structure, the principles of modularity espoused by AO offer a powerful technology for supporting separation of concerns; the authors have found this to be true especially in the area of domain-specific modeling.
Abstract: An Aspect-Oriented (AO) approach can be beneficial at different stages of the software lifecycle and at various levels of abstraction. Whenever the description of a software artifact exhibits crosscutting structure, the principles of modularity espoused by AO offer a powerful technology for supporting separation of concerns. We have found this to be true especially in the area of domain-specific modeling [3].

Journal ArticleDOI
TL;DR: Surprisingly, there is order to the apparent arbitrariness of the World Wide Web's growth: there are many small elements contained within the Web, but few large ones.
Abstract: The past decade has witnessed the birth and explosive growth of the World Wide Web, both in terms of content and user population. Figure 1 shows the exponential growth in the number of Web servers. The number of users online has been growing exponentially as well. Whereas in 1996 there were 61 million users, at the close of 1998 over 147 million people had Internet access worldwide. In the year 2000, the number of Internet users more than doubled again to 400 million [1]. With its remarkable growth, the Web has popularized electronic commerce, and as a result an increasing segment of the world's population conducts commercial transactions online. From its very onset, the Web has demonstrated a tremendous variety in the size of its features. Surprisingly, we found that there is order to the apparent arbitrariness of its growth. One observed pattern is that there are many small elements contained within the Web, but few large ones. A few sites consist of millions of pages, but millions of sites contain only a handful of pages. Few sites contain millions of links, but many sites have one or two. Millions of users flock to a few select sites, giving little attention to millions of others. This diversity can be expressed in mathematical fashion as a distribution of a particular form, called a power law, meaning that the probability of attaining a certain size x is proportional to 1/x raised to a power τ, where τ is greater than or equal to 1. When a distribution of some property has a power law form, the system looks the same at all length scales. What this means is that if one were to look at the distribution of site sizes for one arbitrary range, say just sites that have between 10,000 and 20,000 pages, it would look the same as for a different range, say 10 to 100 pages. In other words, zooming in or out of the distribution, one keeps obtaining the same result. It also means that if one can determine the distribution of pages per site for a range of pages, one can then predict what the distribution will be for another range. Power laws also imply that the average behavior of the system is not typical. A typical size is one that is encountered most frequently, while the average is the sum of all the sizes divided by the number of sites. If one were to select a group of sites at random and count the number of pages in each one, the majority of the sites would be smaller than average. This discrepancy between average and typical behavior is due to the skew of the distribution. Equally interesting, power law distributions have very long tails, which means that there is a finite probability of finding sites extremely large compared to the average.
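In symbols, the distribution described is

\[
P(x) \;\propto\; x^{-\tau}, \qquad \tau \ge 1,
\]

and the scale invariance follows directly: for any zoom factor c,

\[
\frac{P(cx)}{P(x)} \;=\; \frac{(cx)^{-\tau}}{x^{-\tau}} \;=\; c^{-\tau},
\]

a constant independent of x. Rescaling multiplies all probabilities by the same factor, which is why the distribution over 10,000 to 20,000 pages has the same shape as the distribution over 10 to 100 pages, and the heavy right tail is what pulls the average site size above the typical (most frequent) size.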


Journal ArticleDOI
TL;DR: This work views software product development from the perspective of a software consumer to reveal how the software product market is changing ISD.
Abstract: Information systems development (ISD) is best understood as a market phenomenon, a perspective highlighting how software is developed, who performs that development and sells the related products, and how those products are introduced to users. Here I emphasize the increasing specialization of software producers (developers and vendors) as distinct from software-consuming organizations. I also contrast software product development with ISD, emphasizing my view of the worldwide software product market and exploring important implications for consumers. Worldwide software sales rose 280% from 1986 to 1995 [2] and are expected to double again by 2002, fueling market growth, along with the market capitalizations of Microsoft, Oracle, SAP, and other major vendors. Software comes as either packaged (commercial, or shrink-wrap) or made-to-order (custom, or one-off). I distinguish between software producers (vendors of packaged software and software houses) developing, manufacturing, and distributing software and software-consuming organizations acquiring (buying) and using it. Software producers include such huge organizations as EDS, IBM, Lockheed-Martin, Microsoft, Oracle, and SAP, as well as thousands of smaller firms. While a software consumer can be an individual or an organization, I focus on organization-level consumption. Thus, when Microsoft buys a license for, say, SAP R/3 products for managing financial operations and product inventories, it becomes a software consumer. Differentiating between producer and consumer points up that the boundaries between these roles are increasingly organizational in nature. An underlying assumption is that the changes in development are less dramatic than the changes in acquiring and installing software-based information systems [3, 4, 6]. I thus view these issues from the perspective of a software consumer to reveal how the software product market is changing ISD.