Author

Franco Callegati

Bio: Franco Callegati is an academic researcher at the University of Bologna. His research focuses on packet switching and optical burst switching. He has an h-index of 25 and has co-authored 175 publications receiving 3,976 citations. Previous affiliations include Pompeu Fabra University and the University of Adelaide.


Papers
Journal ArticleDOI
TL;DR: The work carried out under the European ACTS KEOPS (KEys to Optical Packet Switching) project, centering on the definition, development and assessment of optical packet switching and routing networks capable of providing transparency to the payload bit rate, is reviewed in this article.
Abstract: This paper reviews the work carried out under the European ACTS KEOPS (KEys to Optical Packet Switching) project, centering on the definition, development and assessment of optical packet switching and routing networks capable of providing transparency to the payload bit rate. The adopted approach uses optical packets of fixed duration with low bit rate headers to facilitate processing at the network/node interfaces. The paper concentrates on the networking concepts developed in the KEOPS project through a description of the implementation issues pertinent to optical packet switching nodes and network/node interfacing blocks, and consideration of the network functionalities provided within the optical packet layer. The implementation, from necessity, relies on advanced optoelectronic components specifically developed within the project, which are also briefly described.
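The appeal of fixed-duration packets with low bit rate headers is easy to see with a little arithmetic: only the header must be processed electronically at a fixed, modest rate, while the payload can run at whatever rate the sources use. A minimal sketch, with illustrative figures rather than the actual KEOPS parameters:

```python
def packet_budget(slot_us: float, header_us: float,
                  header_rate_gbps: float, payload_rate_gbps: float):
    """Bit budget of a fixed-duration optical packet.

    The header occupies a fixed time slice at a low, fixed bit rate
    (so node electronics can always read it); the rest of the slot
    carries the payload at an arbitrary rate, which is what makes the
    slot transparent to the payload bit rate. All figures used here
    are illustrative, not KEOPS parameters.
    """
    header_bits = header_us * header_rate_gbps * 1_000     # us * Gb/s -> bits
    payload_bits = (slot_us - header_us) * payload_rate_gbps * 1_000
    return header_bits, payload_bits

# A 1.6 us slot with a 0.2 us header at 0.622 Gb/s and payload at 10 Gb/s:
print(packet_budget(1.6, 0.2, 0.622, 10.0))  # (~124 header bits, 14000 payload bits)
```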

481 citations

Journal ArticleDOI
TL;DR: This letter proposes a burst assembly mechanism that can reduce the traffic autocorrelation or degree of self-similarity, and at the same time keep the delay due to burst formation limited at the network edges.
Abstract: In this letter we consider burst switching for very high speed routing in the next generation Internet backbone. In this scenario, Internet protocol (IP) packets to a given destination are collected in bursts at the network edges. We propose a burst assembly mechanism that can reduce the traffic autocorrelation or degree of self-similarity, and at the same time keep the delay due to burst formation limited at the network edges.
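As a rough illustration of the kind of mechanism described, here is a minimal sketch of a size-and-timer-capped burst assembler; the class, thresholds, and API are hypothetical, not taken from the letter. Capping how long the first queued packet may wait is what bounds the delay at the network edge.

```python
import time

class BurstAssembler:
    """Collects packets bound for one destination into a burst.

    A burst is emitted when either the size threshold is reached or the
    oldest queued packet has waited longer than max_delay, bounding edge
    delay while smoothing traffic. Thresholds here are illustrative.
    """

    def __init__(self, max_bytes: int = 64_000, max_delay: float = 0.005):
        self.max_bytes = max_bytes      # size cap for a burst
        self.max_delay = max_delay      # seconds the first packet may wait
        self.buffer: list[bytes] = []
        self.size = 0
        self.first_arrival: float | None = None

    def add(self, packet: bytes) -> bytes | None:
        """Queue a packet; returns a completed burst if the size cap is hit."""
        if not self.buffer:
            self.first_arrival = time.monotonic()
        self.buffer.append(packet)
        self.size += len(packet)
        return self.flush() if self.size >= self.max_bytes else None

    def poll(self) -> bytes | None:
        """Call periodically; emits the burst once the delay bound expires."""
        if self.buffer and time.monotonic() - self.first_arrival >= self.max_delay:
            return self.flush()
        return None

    def flush(self) -> bytes:
        burst, self.buffer, self.size = b"".join(self.buffer), [], 0
        self.first_arrival = None
        return burst
```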

356 citations

Journal ArticleDOI
TL;DR: The feasibility of the KEOPS concept is assessed by modeling, laboratory experiments, and testbed implementation of optical packet switching nodes and network/node interfacing blocks, including a fully equipped demonstrator.
Abstract: This paper reviews the work carried out in the ACTS KEOPS (Keys to Optical Packet Switching) project, describing the results obtained to date. The main objective of the project is the definition, development, and assessment of optical packet switching and routing networks, capable of providing transparency to the payload bit rate, using optical packets of fixed duration and low bit rate headers in order to enable easier processing at the network/node interfaces. The feasibility of the KEOPS concept is assessed by modeling, laboratory experiments, and testbed implementation of optical packet switching nodes and network/node interfacing blocks, including a fully equipped demonstrator. The demonstration relies on advanced optoelectronic components, developed within the project, which are described.

354 citations

Journal ArticleDOI
01 Jan 2009
TL;DR: Web-based applications rely on the HTTPS protocol to guarantee privacy and security in transactions ranging from home banking, e-commerce, and e-procurement to those that deal with sensitive data such as career and identity information.
Abstract: Web-based applications rely on the HTTPS protocol to guarantee privacy and security in transactions ranging from home banking, e-commerce, and e-procurement to those that deal with sensitive data such as career and identity information. Users trust this protocol to prevent unauthorized viewing of their personal, financial, and confidential information over the Web.

313 citations

Journal ArticleDOI
TL;DR: The overall network concept is presented, the major choices are highlighted and compared with alternative solutions, and both the wide-area and metropolitan-area parts of the network are covered.
Abstract: In this paper, promising technologies and a network architecture are presented for future optical packet-switched networks. The overall network concept is presented and the major choices are highlighted and compared with alternative solutions. Both long-term and shorter-term approaches are considered, for both the wide-area and metropolitan-area parts of the network. The results presented in this paper were developed within the DAVID (Data And Voice Integration over DWDM) research project, funded by the European Commission through the IST framework.

231 citations


Cited by
Journal ArticleDOI
01 May 1975
TL;DR: The Fundamentals of Queueing Theory, Fourth Edition, as discussed by the authors, provides a comprehensive overview of simple and more advanced queueing models, with a self-contained presentation of key concepts and formulae.
Abstract: Praise for the Third Edition: "This is one of the best books available. Its excellent organizational structure allows quick reference to specific models and its clear presentation . . . solidifies the understanding of the concepts being presented." (IIE Transactions on Operations Engineering)

Thoroughly revised and expanded to reflect the latest developments in the field, Fundamentals of Queueing Theory, Fourth Edition continues to present the basic statistical principles that are necessary to analyze the probabilistic nature of queues. Rather than presenting a narrow focus on the subject, this update illustrates the wide-reaching, fundamental concepts in queueing theory and its applications to diverse areas such as computer science, engineering, business, and operations research. This update takes a numerical approach to understanding and making probable estimations relating to queues, with a comprehensive outline of simple and more advanced queueing models. Newly featured topics of the Fourth Edition include:

- Retrial queues
- Approximations for queueing networks
- Numerical inversion of transforms
- Determining the appropriate number of servers to balance quality and cost of service

Each chapter provides a self-contained presentation of key concepts and formulae, allowing readers to work with each section independently, while a summary table at the end of the book outlines the types of queues that have been discussed and their results. In addition, two new appendices have been added, discussing transforms and generating functions as well as the fundamentals of differential and difference equations. New examples are now included along with problems that incorporate QtsPlus software, which is freely available via the book's related Web site. With its accessible style and wealth of real-world examples, Fundamentals of Queueing Theory, Fourth Edition is an ideal book for courses on queueing theory at the upper-undergraduate and graduate levels. It is also a valuable resource for researchers and practitioners who analyze congestion in the fields of telecommunications, transportation, aviation, and management science.
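As a taste of the models the book covers, here is a minimal sketch of the standard M/M/1 single-server results (utilization, mean number in system, and mean time in system via Little's law); the function name and the example numbers are illustrative:

```python
def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Textbook M/M/1 results: utilization rho, mean number in system L,
    and mean time in system W, related by Little's law L = lambda * W."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = arrival_rate / service_rate           # server utilization
    L = rho / (1 - rho)                         # mean number in system
    W = 1 / (service_rate - arrival_rate)       # mean time in system
    return {"utilization": rho, "L": L, "W": W}

# Example: 8 customers/min arriving, 10 served/min on average.
print(mm1_metrics(8, 10))   # utilization 0.8, L = 4 customers, W = 0.5 min
```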

2,562 citations

Book
01 Nov 2002
TL;DR: Drive development with automated tests, a style of development called “Test-Driven Development” (TDD for short), which aims to dramatically reduce the defect density of code and make the subject of work crystal clear to all involved.
Abstract: From the Book: "Clean code that works" is Ron Jeffries' pithy phrase. The goal is clean code that works, and for a whole bunch of reasons:

- Clean code that works is a predictable way to develop. You know when you are finished, without having to worry about a long bug trail.
- Clean code that works gives you a chance to learn all the lessons that the code has to teach you. If you only ever slap together the first thing you think of, you never have time to think of a second, better, thing.
- Clean code that works improves the lives of users of our software.
- Clean code that works lets your teammates count on you, and you on them.
- Writing clean code that works feels good.

But how do you get to clean code that works? Many forces drive you away from clean code, and even code that works. Without taking too much counsel of our fears, here's what we do: drive development with automated tests, a style of development called "Test-Driven Development" (TDD for short). In Test-Driven Development, you:

- Write new code only if you first have a failing automated test.
- Eliminate duplication.

Two simple rules, but they generate complex individual and group behavior. Some of the technical implications are:

- You must design organically, with running code providing feedback between decisions.
- You must write your own tests, since you can't wait twenty times a day for someone else to write a test.
- Your development environment must provide rapid response to small changes.
- Your designs must consist of many highly cohesive, loosely coupled components, just to make testing easy.

The two rules imply an order to the tasks of programming:

1. Red: write a little test that doesn't work, perhaps doesn't even compile at first.
2. Green: make the test work quickly, committing whatever sins necessary in the process.
3. Refactor: eliminate all the duplication created in just getting the test to work.

Red/green/refactor: the TDD mantra. Assuming for the moment that such a style is possible, it might be possible to dramatically reduce the defect density of code and make the subject of work crystal clear to all involved. If so, writing only code demanded by failing tests also has social implications:

- If the defect density can be reduced enough, QA can shift from reactive to proactive work.
- If the number of nasty surprises can be reduced enough, project managers can estimate accurately enough to involve real customers in daily development.
- If the topics of technical conversations can be made clear enough, programmers can work in minute-by-minute collaboration instead of daily or weekly collaboration.
- Again, if the defect density can be reduced enough, we can have shippable software with new functionality every day, leading to new business relationships with customers.

So, the concept is simple, but what's my motivation? Why would a programmer take on the additional work of writing automated tests? Why would a programmer work in tiny little steps when their mind is capable of great soaring swoops of design? Courage.

Test-driven development is a way of managing fear during programming. I don't mean fear in a bad way, pow widdle prwogwammew needs a pacifiew, but fear in the legitimate, this-is-a-hard-problem-and-I-can't-see-the-end-from-the-beginning sense. If pain is nature's way of saying "Stop!", fear is nature's way of saying "Be careful." Being careful is good, but fear has a host of other effects:

- Makes you tentative
- Makes you want to communicate less
- Makes you shy from feedback
- Makes you grumpy

None of these effects are helpful when programming, especially when programming something hard. So, how can you face a difficult situation and:

- Instead of being tentative, begin learning concretely as quickly as possible.
- Instead of clamming up, communicate more clearly.
- Instead of avoiding feedback, search out helpful, concrete feedback.
- (You'll have to work on grumpiness on your own.)

Imagine programming as turning a crank to pull a bucket of water from a well. When the bucket is small, a free-spinning crank is fine. When the bucket is big and full of water, you're going to get tired before the bucket is all the way up. You need a ratchet mechanism to enable you to rest between bouts of cranking. The heavier the bucket, the closer the teeth need to be on the ratchet. The tests in test-driven development are the teeth of the ratchet. Once you get one test working, you know it is working, now and forever. You are one step closer to having everything working than you were when the test was broken. Now get the next one working, and the next, and the next. By analogy, the tougher the programming problem, the less ground should be covered by each test.

Readers of Extreme Programming Explained will notice a difference in tone between XP and TDD. TDD isn't an absolute like Extreme Programming. XP says, "Here are things you must be able to do to be prepared to evolve further." TDD is a little fuzzier. TDD is an awareness of the gap between decision and feedback during programming, and techniques to control that gap. "What if I do a paper design for a week, then test-drive the code? Is that TDD?" Sure, it's TDD. You were aware of the gap between decision and feedback and you controlled the gap deliberately.

That said, most people who learn TDD find their programming practice changed for good. "Test Infected" is the phrase Erich Gamma coined to describe this shift. You might find yourself writing more tests earlier, and working in smaller steps than you ever dreamed would be sensible. On the other hand, some programmers learn TDD and go back to their earlier practices, reserving TDD for special occasions when ordinary programming isn't making progress.

There are certainly programming tasks that can't be driven solely by tests (or at least, not yet). Security software and concurrency, for example, are two topics where TDD is not sufficient to mechanically demonstrate that the goals of the software have been met. Security relies on essentially defect-free code, true, but also on human judgement about the methods used to secure the software. Subtle concurrency problems can't be reliably duplicated by running the code.

Once you are finished reading this book, you should be ready to:

- Start simply
- Write automated tests
- Refactor to add design decisions one at a time

This book is organized into three sections:

- An example of writing typical model code using TDD. The example is one I got from Ward Cunningham years ago, and have used many times since: multi-currency arithmetic. In it you will learn to write tests before code and grow a design organically.
- An example of testing more complicated logic, including reflection and exceptions, by developing a framework for automated testing. This example also serves to introduce you to the xUnit architecture that is at the heart of many programmer-oriented testing tools. In the second example you will learn to work in even smaller steps than in the first example, including the kind of self-referential hooha beloved of computer scientists.
- Patterns for TDD. Included are patterns for deciding what tests to write, how to write tests using xUnit, and a greatest-hits selection of the design patterns and refactorings used in the examples.

I wrote the examples imagining a pair programming session. If you like looking at the map before wandering around, you may want to go straight to the patterns in Section 3 and use the examples as illustrations. If you prefer just wandering around and then looking at the map to see where you've been, try reading the examples through and referring to the patterns when you want more detail about a technique, then using the patterns as a reference. Several reviewers have commented they got the most out of the examples when they started up a programming environment and entered the code and ran the tests as they read.

A note about the examples. Both examples, multi-currency calculation and a testing framework, appear simple. There are (and I have seen) complicated, ugly, messy ways of solving the same problems. I could have chosen one of those complicated, ugly, messy solutions to give the book an air of "reality." However, my goal, and I hope your goal, is to write clean code that works. Before teeing off on the examples as being too simple, spend 15 seconds imagining a programming world in which all code was this clear and direct, where there were no complicated solutions, only apparently complicated problems begging for careful thought. TDD is a practice that can help you lead yourself to exactly that careful thought.
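To make the red/green/refactor loop concrete, here is a minimal sketch using Python's unittest, loosely echoing the book's multi-currency example; the Dollar class is a toy stand-in, not the book's code. The test is written first and fails (red); the class is the smallest code that makes it pass (green); any duplication would then be refactored away.

```python
import unittest

# Green step: the smallest code that makes the test below pass.
class Dollar:
    def __init__(self, amount: int):
        self.amount = amount

    def times(self, multiplier: int) -> "Dollar":
        return Dollar(self.amount * multiplier)

    def __eq__(self, other: object) -> bool:
        return isinstance(other, Dollar) and self.amount == other.amount

# Red step: this test is written first and fails until Dollar exists.
class TestMoney(unittest.TestCase):
    def test_multiplication(self):
        five = Dollar(5)
        self.assertEqual(Dollar(10), five.times(2))
        self.assertEqual(Dollar(15), five.times(3))

if __name__ == "__main__":
    unittest.main()
```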

1,864 citations

Journal ArticleDOI
TL;DR: In this article, the authors survey the state-of-the-art in NFV and identify promising research directions in this area, and also overview key NFV projects, standardization efforts, early implementations, use cases, and commercial products.
Abstract: Network function virtualization (NFV) has drawn significant attention from both industry and academia as an important shift in telecommunication service provisioning. By decoupling network functions (NFs) from the physical devices on which they run, NFV has the potential to lead to significant reductions in operating expenses (OPEX) and capital expenses (CAPEX) and facilitate the deployment of new services with increased agility and faster time-to-value. The NFV paradigm is still in its infancy and there is a large spectrum of opportunities for the research community to develop new architectures, systems and applications, and to evaluate alternatives and trade-offs in developing technologies for its successful deployment. In this paper, after discussing NFV and its relationship with complementary fields of software defined networking (SDN) and cloud computing, we survey the state-of-the-art in NFV, and identify promising research directions in this area. We also overview key NFV projects, standardization efforts, early implementations, use cases, and commercial products.

1,634 citations

Journal ArticleDOI
TL;DR: A brief overview of NFV is provided, its requirements and architectural framework are explained, several use cases are presented, and the challenges and future directions in this burgeoning research area are discussed.
Abstract: Network function virtualization was recently proposed to improve the flexibility of network service provisioning and reduce the time to market of new services. By leveraging virtualization technologies and commercial off-the-shelf programmable hardware, such as general-purpose servers, storage, and switches, NFV decouples the software implementation of network functions from the underlying hardware. As an emerging technology, NFV brings several challenges to network operators, such as the guarantee of network performance for virtual appliances, their dynamic instantiation and migration, and their efficient placement. In this article, we provide a brief overview of NFV, explain its requirements and architectural framework, present several use cases, and discuss the challenges and future directions in this burgeoning research area.
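One of the challenges named here, efficient placement, can be made concrete with a toy model: map a chain of virtual network functions onto servers with limited capacity. A minimal first-fit sketch, with all names and numbers hypothetical:

```python
def place_chain(chain, servers):
    """First-fit placement of a VNF chain onto servers (toy heuristic).

    chain:   list of (vnf_name, cpu_demand) pairs, in service order.
    servers: dict mapping server name -> spare CPU capacity (mutated).
    Returns {vnf_name: server}, or raises if the chain does not fit.
    Real NFV orchestrators also weigh latency, bandwidth, and migration
    costs; this only illustrates the capacity-constrained mapping.
    """
    placement = {}
    for vnf, demand in chain:
        for server, spare in servers.items():
            if spare >= demand:
                placement[vnf] = server
                servers[server] = spare - demand
                break
        else:
            raise ValueError(f"no server can host {vnf}")
    return placement

# Example: a firewall -> NAT -> DPI chain over two small servers.
chain = [("firewall", 2), ("nat", 1), ("dpi", 3)]
print(place_chain(chain, {"srv-a": 4, "srv-b": 3}))
# {'firewall': 'srv-a', 'nat': 'srv-a', 'dpi': 'srv-b'}
```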

1,076 citations

Journal ArticleDOI
TL;DR: The basic concept of OBS is described and a general architecture of optical core routers and electronic edge routers in the OBS network is presented and a nonperiodic time-interval burst assembly mechanism is described.
Abstract: Optical burst switching (OBS) is a promising solution for building terabit optical routers and realizing IP over WDM. In this paper, we describe the basic concept of OBS and present a general architecture of optical core routers and electronic edge routers in the OBS network. The key design issues related to the OBS are also discussed, namely, burst assembly (burstification), channel scheduling, burst offset-time management, and some dimensioning rules. A nonperiodic time-interval burst assembly mechanism is described. A class of data channel scheduling algorithms with void filling is proposed for optical routers using a fiber delay line buffer. The LAUC-VF (latest available unused channel with void filling) channel scheduling algorithm is studied in detail. Initial results on the burst traffic characteristics and on the performance of optical routers in the OBS network with self-similar traffic as inputs are reported in the paper.
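As a rough sketch of the void-filling idea behind LAUC-VF: among data channels that can carry the burst in its reservation window (possibly inside a void between existing reservations), pick the one whose preceding reservation ends latest, minimizing the unused gap created in front of the burst. The data structures and details below are illustrative, not the paper's specification.

```python
def lauc_vf(channels, arrival: float, duration: float):
    """Latest Available Unused Channel with Void Filling (sketch).

    channels: list of per-channel schedules, each a sorted list of
              (start, end) reservations.
    Returns the index of the chosen channel, or None if the burst
    cannot be scheduled and would be dropped.
    """
    end = arrival + duration
    best, best_gap_start = None, -1.0
    for i, schedule in enumerate(channels):
        prev_end = 0.0          # latest reservation ending before arrival
        feasible = True
        for s, e in schedule:
            if e <= arrival:
                prev_end = max(prev_end, e)
            elif s < end:       # reservation overlaps [arrival, end)
                feasible = False
                break
        # Prefer the feasible channel with the latest preceding end time,
        # i.e. the smallest void left in front of the new burst.
        if feasible and prev_end > best_gap_start:
            best, best_gap_start = i, prev_end
    return best

# Example: the burst [5, 8) fits into channel 1's void between its two
# reservations and leaves a smaller leading gap there than on channel 0.
channels = [[(0.0, 2.0)], [(0.0, 4.0), (9.0, 12.0)]]
print(lauc_vf(channels, arrival=5.0, duration=3.0))   # -> 1
```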

961 citations