Author

Bikram Sengupta

Other affiliations: Indian Institutes of Technology
Bio: Bikram Sengupta is an academic researcher from IBM. The author has contributed to research in topics including service delivery frameworks and transaction data. The author has an h-index of 16 and has co-authored 81 publications receiving 1,109 citations. Previous affiliations of Bikram Sengupta include the Indian Institutes of Technology.


Papers
Proceedings ArticleDOI
28 May 2006
TL;DR: This paper reports on a study of distributed software development that helped shape a research agenda for this field, and identifies four areas where important research questions need to be addressed to make distributed development more effective.
Abstract: In recent years, a number of business reasons have caused software development to become increasingly distributed. Remote development of software offers several advantages, but it is also fraught with challenges. In this paper, we report on our study of distributed software development that helped shape a research agenda for this field. Our study has identified four areas where important research questions need to be addressed to make distributed development more effective. These areas are: collaborative software tools, knowledge acquisition and management, testing in a distributed set-up and process and metrics issues. We present a brief summary of related research in each of these areas, and also outline open research issues.

236 citations

Journal ArticleDOI
TL;DR: A new collaborative tool named EGRET (Eclipse-based global requirements tool) for distributed requirements management, which aims to alleviate many of the difficulties faced by geographically distributed practitioners who lack appropriate tool support.
Abstract: Requirements management, one of the most collaboration-intensive activities in software development, presents significant difficulties when stakeholders are distributed, as in today's global projects. Because of inadequate social contact, geographically distributed practitioners without appropriate tool support have trouble gaining a consistent understanding of requirements or managing requirement changes. We can alleviate many of these difficulties by integrating collaboration support into practitioners' work environments. With these needs in mind, we developed a new collaborative tool named EGRET (Eclipse-based global requirements tool) for distributed requirements management.

124 citations

Patent
14 Jul 2010
TL;DR: In this patent, a timesheet assistant mines development items in a computer repository to form identified development items, and then applies statistical analysis to tasks of the identified items using extracted effort indicators.
Abstract: A timesheet assistant mines development items in a repository of a computer to form identified development items. Development context information and effort indicators, associated with the identified development items, are extracted. Statistical analysis is applied to tasks of the identified development items using the effort indicators. Efforts expended on the tasks are predicted using historical data to create effort estimates. Developer reported efforts for the identified items are received, and a timesheet is generated using the development context information, the effort estimates and the developer reported effort. The timesheet is presented for review, verification, and approval.
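The pipeline the patent describes (mine development items, extract effort indicators, predict effort from historical data, then assemble a timesheet for review) can be sketched roughly as follows. The toy data, function names, and the mean-of-similar-tasks heuristic are illustrative assumptions, not the patented method:

```python
from statistics import mean

# Illustrative historical data: (task_type, hours) pairs, standing in for
# effort indicators mined from a development-item repository.
HISTORY = [
    ("bugfix", 3.0), ("bugfix", 5.0), ("bugfix", 4.0),
    ("feature", 10.0), ("feature", 14.0),
]

def estimate_effort(task_type, history=HISTORY, default=8.0):
    """Predict effort for a task as the mean of historical efforts for
    tasks of the same type; fall back to a default for unseen types."""
    similar = [hours for kind, hours in history if kind == task_type]
    return mean(similar) if similar else default

def build_timesheet(tasks, reported):
    """Combine effort estimates with developer-reported hours into rows
    ready for review and verification: (task, estimate, reported)."""
    return [(t, estimate_effort(t), reported.get(t, 0.0)) for t in tasks]

rows = build_timesheet(["bugfix", "feature"], {"bugfix": 4.5})
```

A real system would compare the estimate against the reported figure to flag discrepancies before approval; that comparison is omitted here for brevity.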

57 citations

Proceedings ArticleDOI
12 Aug 2012
TL;DR: SmartDispatch is a learning-based tool that seeks to automate the process of ticket dispatch while maintaining high accuracy levels, and is able to suggest a short list of 3-5 groups that contain the correct resolution group with a high probability.
Abstract: In an IT service delivery environment, the speedy dispatch of a ticket to the correct resolution group is the crucial first step in the problem resolution process. The size and complexity of such environments make the dispatch decision challenging, and incorrect routing by a human dispatcher can lead to significant delays that degrade customer satisfaction and have adverse financial implications for both the customer and the IT vendor. In this paper, we present SmartDispatch, a learning-based tool that seeks to automate the process of ticket dispatch while maintaining high accuracy levels. SmartDispatch comes with two classification approaches: the well-known SVM method, and a discriminative term-based approach that we designed to address some of the issues we empirically observed in SVM classification. Using a combination of these approaches, SmartDispatch is able to automate the dispatch of a ticket to the correct resolution group for a large share of the tickets, while for the rest, it is able to suggest a short list of 3-5 groups that contains the correct resolution group with high probability. Empirical evaluation of SmartDispatch on data from three large service engagement projects at IBM demonstrates the efficacy and practical utility of the approach.
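The term-based idea (score candidate resolution groups by the terms in a ticket, and fall back to a short ranked list when confidence is low) can be illustrated with a toy bag-of-words sketch. The training data, names, and scoring rule here are assumptions for illustration, not SmartDispatch's actual algorithm:

```python
from collections import Counter, defaultdict

# Toy training tickets: (text, resolution_group). Purely illustrative.
TRAIN = [
    ("password reset for user account", "identity"),
    ("account locked after failed login", "identity"),
    ("database connection timeout", "dba"),
    ("sql query running slow on database", "dba"),
    ("vpn tunnel drops intermittently", "network"),
    ("cannot reach server over vpn", "network"),
]

def train(samples):
    """Count how often each term appears in each group's tickets."""
    counts = defaultdict(Counter)
    for text, group in samples:
        counts[group].update(text.lower().split())
    return counts

def dispatch(text, counts, shortlist=3):
    """Score each group by its term overlap with the ticket and return
    groups ranked best-first, truncated to a short candidate list."""
    terms = text.lower().split()
    scores = {g: sum(c[t] for t in terms) for g, c in counts.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:shortlist]

model = train(TRAIN)
```

In a production tool the top-ranked group would be used when its score margin is large, and the full shortlist surfaced to a human dispatcher otherwise.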

53 citations

Proceedings ArticleDOI
24 Aug 2014
TL;DR: Demonstrates that it is possible to predict students at risk of poor assessment performance with a high degree of accuracy, and to do so well in advance; these insights can be used to proactively initiate personalized intervention programs and improve the chances of student success.
Abstract: Poor academic performance in K-12 is often a precursor to unsatisfactory educational outcomes such as dropout, which are associated with significant personal and social costs. Hence, it is important to be able to predict students at risk of poor performance, so that the right personalized intervention plans can be initiated. In this paper, we report on a large-scale study to identify students at risk of not meeting acceptable levels of performance in one state-level and one national standardized assessment in Grade 8 of a major US school district. An important highlight of our study is its scale, in terms of the number of students included, the number of years covered, and the number of features considered, which provides a very solid grounding for the research. We report on our experience with handling the scale and complexity of the data, and on the relative performance of the various machine learning techniques we used for building predictive models. Our results demonstrate that it is possible to predict students at risk of poor assessment performance with a high degree of accuracy, and to do so well in advance. These insights can be used to proactively initiate personalized intervention programs and improve the chances of student success.
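As a rough illustration of this kind of predictive modeling, here is a tiny logistic-regression classifier trained by gradient descent on made-up features (an attendance rate and a normalized prior score). The features, data, and model are illustrative assumptions; the study's actual features and techniques are not shown:

```python
import math

# Toy data: (attendance_rate, normalized_prior_score) -> 1 if at risk.
X = [(0.95, 0.9), (0.9, 0.8), (0.6, 0.4), (0.5, 0.3), (0.85, 0.7), (0.55, 0.35)]
Y = [0, 0, 1, 1, 0, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit(X, Y, lr=0.5, epochs=2000):
    """Train logistic-regression weights by stochastic gradient descent."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(X, Y):
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            err = p - y  # gradient of the log loss w.r.t. the logit
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def at_risk(x, w, b, threshold=0.5):
    """Flag a student as at risk if the predicted probability is high."""
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b) >= threshold

w, b = fit(X, Y)
```

The point of the sketch is the workflow (historical features in, a calibrated risk flag out), which is what enables intervention programs to be targeted early.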

49 citations


Cited by
Book
01 Nov 2002
TL;DR: Advocates driving development with automated tests, a style of development called "Test-Driven Development" (TDD for short), which aims to dramatically reduce the defect density of code and make the subject of work crystal clear to all involved.
Abstract: From the Book: "Clean code that works" is Ron Jeffries' pithy phrase. The goal is clean code that works, and for a whole bunch of reasons. Clean code that works is a predictable way to develop: you know when you are finished, without having to worry about a long bug trail. Clean code that works gives you a chance to learn all the lessons that the code has to teach you; if you only ever slap together the first thing you think of, you never have time to think of a second, better, thing. Clean code that works improves the lives of the users of our software. Clean code that works lets your teammates count on you, and you on them. Writing clean code that works feels good. But how do you get to clean code that works? Many forces drive you away from clean code, and even code that works. Without taking too much counsel of our fears, here's what we do: drive development with automated tests, a style of development called "Test-Driven Development" (TDD for short). In Test-Driven Development, you write new code only if you first have a failing automated test, and you eliminate duplication. Two simple rules, but they generate complex individual and group behavior. Some of the technical implications are: you must design organically, with running code providing feedback between decisions; you must write your own tests, since you can't wait twenty times a day for someone else to write a test; your development environment must provide rapid response to small changes; and your designs must consist of many highly cohesive, loosely coupled components, just to make testing easy. The two rules imply an order to the tasks of programming: 1. Red: write a little test that doesn't work, perhaps doesn't even compile at first. 2. Green: make the test work quickly, committing whatever sins necessary in the process. 3. Refactor: eliminate all the duplication created in just getting the test to work. Red/green/refactor: the TDD mantra.
Assuming for the moment that such a style is possible, it might be possible to dramatically reduce the defect density of code and make the subject of work crystal clear to all involved. If so, writing only code demanded by failing tests also has social implications: if the defect density can be reduced enough, QA can shift from reactive to proactive work; if the number of nasty surprises can be reduced enough, project managers can estimate accurately enough to involve real customers in daily development; if the topics of technical conversations can be made clear enough, programmers can work in minute-by-minute collaboration instead of daily or weekly collaboration; and again, if the defect density can be reduced enough, we can have shippable software with new functionality every day, leading to new business relationships with customers. So, the concept is simple, but what's my motivation? Why would a programmer take on the additional work of writing automated tests? Why would a programmer work in tiny little steps when their mind is capable of great soaring swoops of design? Courage. Test-driven development is a way of managing fear during programming. I don't mean fear in a bad way (pow widdle prwogwammew needs a pacifiew), but fear in the legitimate, this-is-a-hard-problem-and-I-can't-see-the-end-from-the-beginning sense. If pain is nature's way of saying "Stop!", fear is nature's way of saying "Be careful." Being careful is good, but fear has a host of other effects: it makes you tentative, makes you want to communicate less, makes you shy from feedback, and makes you grumpy. None of these effects are helpful when programming, especially when programming something hard. So, how can you face a difficult situation and, instead of being tentative, begin learning concretely as quickly as possible; instead of clamming up, communicate more clearly; and instead of avoiding feedback, search out helpful, concrete feedback? (You'll have to work on grumpiness on your own.)
Imagine programming as turning a crank to pull a bucket of water from a well. When the bucket is small, a free-spinning crank is fine. When the bucket is big and full of water, you’re going to get tired before the bucket is all the way up. You need a ratchet mechanism to enable you to rest between bouts of cranking. The heavier the bucket, the closer the teeth need to be on the ratchet. The tests in test-driven development are the teeth of the ratchet. Once you get one test working, you know it is working, now and forever. You are one step closer to having everything working than you were when the test was broken. Now get the next one working, and the next, and the next. By analogy, the tougher the programming problem, the less ground should be covered by each test. Readers of Extreme Programming Explained will notice a difference in tone between XP and TDD. TDD isn’t an absolute like Extreme Programming. XP says, “Here are things you must be able to do to be prepared to evolve further.” TDD is a little fuzzier. TDD is an awareness of the gap between decision and feedback during programming, and techniques to control that gap. “What if I do a paper design for a week, then test-drive the code? Is that TDD?” Sure, it’s TDD. You were aware of the gap between decision and feedback and you controlled the gap deliberately. That said, most people who learn TDD find their programming practice changed for good. “Test Infected” is the phrase Erich Gamma coined to describe this shift. You might find yourself writing more tests earlier, and working in smaller steps than you ever dreamed would be sensible. On the other hand, some programmers learn TDD and go back to their earlier practices, reserving TDD for special occasions when ordinary programming isn’t making progress. There are certainly programming tasks that can’t be driven solely by tests (or at least, not yet). 
Security software and concurrency, for example, are two topics where TDD is not sufficient to mechanically demonstrate that the goals of the software have been met. Security relies on essentially defect-free code, true, but also on human judgement about the methods used to secure the software. Subtle concurrency problems can't be reliably duplicated by running the code. Once you are finished reading this book, you should be ready to start simply, write automated tests, and refactor to add design decisions one at a time. This book is organized into three sections. First, an example of writing typical model code using TDD; the example is one I got from Ward Cunningham years ago, and have used many times since: multi-currency arithmetic. In it you will learn to write tests before code and grow a design organically. Second, an example of testing more complicated logic, including reflection and exceptions, by developing a framework for automated testing; this example also serves to introduce you to the xUnit architecture that is at the heart of many programmer-oriented testing tools. In the second example you will learn to work in even smaller steps than in the first example, including the kind of self-referential hooha beloved of computer scientists. Third, patterns for TDD: included are patterns for deciding what tests to write, how to write tests using xUnit, and a greatest-hits selection of the design patterns and refactorings used in the examples. I wrote the examples imagining a pair programming session. If you like looking at the map before wandering around, you may want to go straight to the patterns in Section 3 and use the examples as illustrations. If you prefer just wandering around and then looking at the map to see where you've been, try reading the examples through, referring to the patterns when you want more detail about a technique, and then using the patterns as a reference.
Several reviewers have commented they got the most out of the examples when they started up a programming environment and entered the code and ran the tests as they read. A note about the examples. Both examples, multi-currency calculation and a testing framework, appear simple. There are (and I have seen) complicated, ugly, messy ways of solving the same problems. I could have chosen one of those complicated, ugly, messy solutions to give the book an air of “reality.” However, my goal, and I hope your goal, is to write clean code that works. Before teeing off on the examples as being too simple, spend 15 seconds imagining a programming world in which all code was this clear and direct, where there were no complicated solutions, only apparently complicated problems begging for careful thought. TDD is a practice that can help you lead yourself to exactly that careful thought.
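The red/green/refactor cycle on the book's multi-currency example can be sketched in a few lines; this is a minimal illustration under assumed names (`Dollar`, `times`), not the book's actual code:

```python
# Red: write a little failing test first. Until Dollar exists and
# multiplication works, running this test raises an error.
def test_multiplication():
    five = Dollar(5)
    assert five.times(2) == Dollar(10)

# Green: write just enough code to make the test pass.
class Dollar:
    def __init__(self, amount):
        self.amount = amount

    def times(self, multiplier):
        # A refactor step would later generalize this beyond dollars.
        return Dollar(self.amount * multiplier)

    def __eq__(self, other):
        return isinstance(other, Dollar) and self.amount == other.amount

test_multiplication()  # now passes silently
```

The refactor step would then remove any duplication introduced while getting to green, keeping each test's "tooth of the ratchet" in place.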

1,864 citations

01 Nov 2011
TL;DR: The Communication program emphasizes theory, research, and application to examine the ways humans communicate, verbally and non-verbally, across a variety of levels and contexts, in order to understand ourselves, our media, our relationships, our culture, and how these things connect.
Abstract: The Communication program emphasizes theory, research, and application to examine the ways humans communicate, verbally and non-verbally, across a variety of levels and contexts. This is particularly important as communication shapes our ideas and values, gives rise to our politics, consumption, and socialization, and helps to define our identities and realities. Its power and potential are inestimable. From the briefest of text messages to the grandest of public declarations, we indeed live within communication and invite you to join us in appreciating its increasing importance in contemporary society. From Twitter and reality television to family relationships and workplace dynamics, communication is about understanding ourselves, our media, our relationships, our culture, and how these things connect.

822 citations

Proceedings ArticleDOI
23 May 2007
TL;DR: Describes a desired future for global development and the problems that stand in the way of achieving that vision, and notes the need for a systematic understanding of what drives the need to coordinate, together with effective mechanisms for bringing coordination about.
Abstract: Globally-distributed projects are rapidly becoming the norm for large software systems, even as it becomes clear that global distribution of a project seriously impairs critical coordination mechanisms. In this paper, I describe a desired future for global development and the problems that stand in the way of achieving that vision. I review research and lay out research challenges in four critical areas: software architecture, eliciting and communicating requirements, environments and tools, and orchestrating global development. I conclude by noting the need for a systematic understanding of what drives the need to coordinate and effective mechanisms for bringing it about.

712 citations

Journal Article
TL;DR: The No Child Left Behind Act of 2001 (NCLBA) has become one of the most frequent educational news items in the public press, on the Web, and in professional journals.
Abstract: Ranking as one of the most frequent educational news items in the public press, on the Web, and in professional journals (see sidebar), the No Child Left Behind Act of 2001 (NCLBA) has truly captured national conversation. But one thing is missing from all this talk, and that is any mention of just what it is we're at risk of leaving our children behind. NCLBA, which was signed into law in January 2002 (PL107-110), presumably is a comprehensive effort to improve education for all children in the US by providing them successful schools with qualified teachers in every classroom and fair assessments of learning. Few of us would seriously argue against these goals. Some, however, have questioned the implications of NCLBA (e.g., Lewis, 2002; Linn, Baker, & Betebenner, 2002). In fact, the title itself, "No Child Left Behind," suggests that a strong political agenda accompanies its stated goals. Connotations of "Left Behind": What images come to your mind when you hear the phrase, "left behind"? A skittish racehorse caught in the starting gate? An out-of-breath traveler narrowly missing a bus or train? A family continuing a road trip only to remember they left their youngest child at the rest stop? Now extend these images to the context of education. Does learning have a definitive "starting gate" and "finish line"? Is learning a race? Do some children miss the learning train because they arrived at the station too late or because they're standing on the wrong platform? In many ways, the NCLBA requirements fit these images. The states' adequate yearly progress (AYP) targets may indeed constitute finish lines for learners. Children who learn differently or uniquely (and which of them don't?) may be too late for the learning train, particularly when qualified teachers need only to pass tests of content and not pedagogic knowledge and practice; indeed, under NCLBA, developmental learning becomes an oxymoron.
And what about the artistically gifted children who learn best in ways not legitimized by NCLBA's limited definition of scientific research? Aren't they waiting on the wrong platform? Add to these images the "one size fits all" perspective of "Reading First," which aligns instruction, materials, and teacher preparation with that narrowly defined scientific research, and the increased testing and accountability requirements across the 50 states. It becomes easier to see why some of my Texas colleagues paraphrase the law as "no child left standing." Under NCLBA, learning is reduced to content that is transmitted and then tested. Period. Independent, engaged, life-long learners are not part of the image. Neither are the truly qualified teachers. Think of those you've known. You would likely describe them as people who nurture independent learning, who share learners' interests, and who are learners themselves. They do not talk of children as being "left behind," except perhaps through a national curriculum and legislated policies that disrespect the interest, control, and power of the learner in the learning process. New Science, New Talk: Maybe it's the scientific research base as defined by the NCLBA that needs to be left behind. It is, after all, rooted in 17th-century Newtonian concepts of the universe. New sciences, such as quantum physics, self-organizing systems, and chaos theory, are more useful in understanding the complex systems of the 21st century (Wheatley, 1999). …

444 citations

Proceedings ArticleDOI
23 May 2007
TL;DR: Means to meet the challenges to this vision of empirical research methods for software engineering include increased competence in how to apply and combine alternative empirical methods, tighter links between academia and industry, the development of common research agendas with a focus on empirical methods, and more resources for empirical research.
Abstract: We present the vision that for all fields of software engineering (SE), empirical research methods should enable the development of scientific knowledge about how useful different SE technologies are for different kinds of actors, performing different kinds of activities, on different kinds of systems. It is part of the vision that such scientific knowledge will guide the development of new SE technology and is a major input to important SE decisions in industry. Major challenges to the pursuit of this vision are: more SE research should be based on the use of empirical methods; the quality, including relevance, of the studies using such methods should be increased; there should be more and better synthesis of empirical evidence; and more theories should be built and tested. Means to meet these challenges include (1) increased competence regarding how to apply and combine alternative empirical methods, (2) tighter links between academia and industry, (3) the development of common research agendas with a focus on empirical methods, and (4) more resources for empirical research.

441 citations