
Showing papers by "Florida Polytechnic University" published in 2007


Proceedings ArticleDOI
01 May 2007
TL;DR: This paper introduces a metric that captures the channel conditions and the load of the APs in the network, and extends the association mechanism in a cross-layer manner by taking routing-layer information into account.
Abstract: In IEEE 802.11-based wireless mesh networks a user is associated with an access point (AP) in order to communicate and be part of the overall network. The association mechanism specified by the IEEE 802.11 standard does not consider the channel conditions and the AP load in the association process. Employing the mechanism in its plain form in wireless mesh networks may therefore yield only low throughput and low user transmission rates. In this paper, we propose an association mechanism that is aware of the uplink and downlink channel conditions. We introduce a metric that captures the channel conditions and the load of the APs in the network. Users apply this metric in order to associate optimally with the available APs. We then extend the functionality of this mechanism in a cross-layer manner, taking into account information from the routing layer. The novelty of the mechanism is that the routing QoS information of the backhaul is available to the end users. This information can be combined with the uplink and downlink channel information to support optimal end-to-end communication and provide high end-to-end throughput. We evaluate the performance of our system through simulations and show that 802.11-based mesh networks using the proposed association mechanism are more capable of meeting the needs of QoS-sensitive applications.
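The abstract does not give the metric's exact form, but a minimal Python sketch of how a joint channel/load score might drive AP selection could look as follows; the weighting, the function names, and the example numbers are hypothetical stand-ins, not the paper's formula:

```python
# Illustrative sketch only: the abstract does not specify the metric, so the
# combination of channel rate and AP load below is a hypothetical stand-in.

def association_score(uplink_rate_mbps, downlink_rate_mbps, ap_load):
    """Score an AP from user-observed channel rates and the AP's load.

    ap_load is the fraction of the AP's capacity already in use (0.0-1.0);
    higher scores are better.
    """
    channel_quality = min(uplink_rate_mbps, downlink_rate_mbps)  # bottleneck direction
    return channel_quality * (1.0 - ap_load)

def choose_ap(candidates):
    """Pick the AP with the best combined channel/load score.

    candidates: list of (ap_id, uplink_rate, downlink_rate, load) tuples.
    """
    return max(candidates,
               key=lambda c: association_score(c[1], c[2], c[3]))[0]

# Example: AP 'b' wins despite a slower channel because AP 'a' is heavily loaded.
aps = [("a", 54.0, 54.0, 0.9), ("b", 36.0, 48.0, 0.2)]
print(choose_ap(aps))  # -> 'b'
```

The cross-layer extension described in the abstract would add a backhaul QoS term from the routing layer to the same score; that term is omitted here since its form is not given.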

73 citations


Proceedings ArticleDOI
17 Jun 2007
TL;DR: An approach based on a version of a dynamic programming framework, called Level Building, to simultaneously segment and match signs to continuous sign language sentences in the presence of movement epenthesis (me).
Abstract: One of the hard problems in automated sign language recognition is the movement epenthesis (me) problem. Movement epenthesis is the gesture movement that bridges two consecutive signs. This effect can span a long duration and involve variations in hand shape, position, and movement, making it hard to explicitly model these intervening segments. This creates a problem when trying to match individual signs to full sign sentences, since for many chunks of the sentence, corresponding to these mes, we have no models. We present an approach based on a version of a dynamic programming framework, called Level Building, to simultaneously segment and match signs to continuous sign language sentences in the presence of movement epenthesis (me). We enhance the classical Level Building framework so that it can accommodate me labels for which we do not have explicit models. This enhanced Level Building algorithm is then coupled with a trigram grammar model to optimally segment and label sign language sentences. We demonstrate the efficiency of the algorithm using a single-view video dataset of continuous sign language sentences. We obtain an 83% word-level recognition rate with the enhanced Level Building approach, as opposed to a 20% recognition rate using a classical Level Building framework on the same dataset. The proposed approach is novel since it does not need explicit models for movement epenthesis.
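A hedged sketch of the enhanced Level Building dynamic program described above, with the trigram grammar coupling omitted; the cost function and the flat per-frame me penalty are illustrative assumptions, not the paper's trained sign models:

```python
# Minimal sketch of the enhanced Level Building DP. The sign match costs and
# the per-frame movement-epenthesis penalty ME_COST are hypothetical; a real
# system would derive them from trained sign models.

ME_COST = 1.0  # flat per-frame cost for an unmodeled "me" segment

def level_building(num_frames, signs, match_cost, max_levels):
    """Segment frames [0, num_frames) into up to max_levels labeled chunks.

    match_cost(sign, start, end) -> cost of matching `sign` to frames
    [start, end). The special label "me" absorbs inter-sign transitions at
    a flat per-frame penalty instead of requiring an explicit model.
    """
    INF = float("inf")
    # best[l][t] = (cost, label, start): best parse of frames [0, t) in l segments.
    best = [[(INF, None, None)] * (num_frames + 1) for _ in range(max_levels + 1)]
    best[0][0] = (0.0, None, None)

    for level in range(1, max_levels + 1):
        for end in range(1, num_frames + 1):
            for start in range(end):
                prev_cost = best[level - 1][start][0]
                if prev_cost == INF:
                    continue
                # Try every sign model, plus the model-free "me" label.
                candidates = [(s, match_cost(s, start, end)) for s in signs]
                candidates.append(("me", ME_COST * (end - start)))
                for label, cost in candidates:
                    total = prev_cost + cost
                    if total < best[level][end][0]:
                        best[level][end] = (total, label, start)

    # Backtrack from the cheapest level that spans all frames.
    level = min(range(1, max_levels + 1), key=lambda l: best[l][num_frames][0])
    labels, t = [], num_frames
    while t > 0:
        _, label, start = best[level][t]
        labels.append(label)
        t, level = start, level - 1
    return list(reversed(labels))
```

Because the "me" label is always available at a finite cost, every sentence has a valid parse even where no sign model fits, which is the key departure from classical Level Building.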

59 citations


Proceedings ArticleDOI
18 Oct 2007
TL;DR: The benefits of various forms of "live coding" and of test-driven pair programming active learning practices are discussed, along with pedagogical strategies that instructors can leverage to address conceptual difficulties encountered by students in introductory programming courses.
Abstract: This descriptive study discusses two conceptual difficulties encountered by students in introductory programming courses regardless of the chosen language or pedagogical approach (e.g. objects, classes, or fundamentals first). Firstly, students tend to learn programming by memorizing correct code examples instead of acquiring the programming thought process. Secondly, they tend to read code by "flying" over it at a comfortable altitude while thinking about its assumed intent. While comfortable, this practice fails to train students to develop the rigor needed to catch bugs in others' or their own code. Both trends result in an almost complete loss of intentionality in the programming activity; un-innovative code is generated by analogy with (or cut and pasted from) existing solutions and is then almost randomly modified until "it fits" the minimal test requirements, without real analysis of its flaws. We review and evaluate pedagogical strategies that instructors can leverage to address the above-mentioned issues. Namely, we discuss the benefits of various forms of "live coding" and of test-driven pair programming active learning practices.

45 citations


Journal ArticleDOI
TL;DR: The author argues for a fuller and more complex form of practice as praxis, in contrast with Shulman et al.'s implied preference for concrete existing practice as the template for future practice.
Abstract: In the April 2006 issue of Educational Researcher, Shulman, Golde, Bueschel, and Garabedian offered their response to the recent outpouring of criticism calling for reform of doctoral education degrees in the United States. The centerpiece of their proposal was the development of a new practitioner-oriented doctoral degree to replace the Ed.D. This article critiques the conceptual validity of the proposal—especially the idea that existing practice can be the driving force for the proposed curriculum reforms. The author argues for a fuller and more complex form of practice as praxis, in contrast with Shulman et al.’s implied preference for concrete existing practice—what might be called the actuality of practice—as the template for future practice.

30 citations


Journal ArticleDOI
TL;DR: A chronology of the US National Library of Medicine's contribution to access to the world's biomedical literature through its computerization of biomedical indexes, particularly the Medical Literature Analysis and Retrieval System (MEDLARS), is provided.
Abstract: Objective: The research provides a chronology of the US National Library of Medicine's (NLM's) contribution to access to the world's biomedical literature through its computerization of biomedical indexes, particularly the Medical Literature Analysis and Retrieval System (MEDLARS). Method: Using material gathered from NLM's archives and from personal interviews with people associated with developing MEDLARS and its associated systems, the author discusses key events in the history of MEDLARS. Discussion: From the development of the early mechanized bibliographic retrieval systems of the 1940s to the beginnings of online, interactive computerized bibliographic search systems of the early 1970s chronicled here, NLM's contributions to automation and bibliographic retrieval have been extensive. Conclusion: As NLM's technological experience and expertise grew, innovative bibliographic storage and retrieval systems emerged. NLM's accomplishments regarding MEDLARS were cutting edge, placing the library at the forefront of incorporating mechanization and technologies into medical information systems.

22 citations


Journal ArticleDOI
TL;DR: In this paper, the authors report empirical research that examined the impact of conflict in two different buyer-seller situations, an ongoing relationship and a choice situation where the buyer had to choose between two or more alternative suppliers.
Abstract: Purpose. The purpose of this paper is to report empirical research that examined the impact of conflict in two different buyer-seller situations: an ongoing relationship and a choice situation where the buyer had to choose between two or more alternative suppliers. Conflict was defined as social conflict, with two distinct types, affective and cognitive. Methodology/Approach. The methodology consisted of two mail surveys sent to a random sample of purchasing association members who had buying responsibilities in their firms. In one survey, respondents were asked to self-select a current buyer-seller relationship they had maintained for at least one year and to indicate the degree of perceived conflict they had with the key supplier representative as well as the amount of relationship loyalty they perceived they had with that supplier. The second survey randomly assigned respondents to evaluate either a supplier whom they gave business to in a choice situation or one they did not, thus establishing as th...

19 citations


Journal ArticleDOI
TL;DR: The Tiered Algorithm employs a global invariant of equality between process production and consumption at each level of process nesting to detect termination, regardless of execution interleaving order and network transit time, for time-efficient and message-efficient detection of process termination.
Abstract: The Tiered Algorithm is presented for time-efficient and message-efficient detection of process termination. It employs a global invariant of equality between process production and consumption at each level of process nesting to detect termination, regardless of execution interleaving order and network transit time. Correctness is validated for arbitrary process launching hierarchies, including launch-in-transit hazards, where processes are created dynamically based on runtime conditions for remote execution. The performance of the Tiered Algorithm is compared to three existing schemes with comparable capabilities, namely, the Chandrasekaran and Venkatesan (CV), Lai, Tseng, and Dong (LTD), and Credit termination detection algorithms. For synchronization of T tasks terminating in E epochs of idle processing, the Tiered Algorithm is shown to incur O(E) message count complexity and O(T lg T) message bit complexity while incurring detection latency corresponding to only integer addition and comparison. Synchronization performance in terms of message overhead, detection operations, and storage requirements is evaluated and compared across numerous task creation and termination hierarchies.
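A minimal single-process sketch of the per-tier production/consumption invariant at the heart of the algorithm; the real Tiered Algorithm is a distributed message protocol (including launch-in-transit handling), so the class and method names below are hypothetical illustrations only:

```python
# Hedged sketch of the invariant only, not the paper's message protocol.

class TieredDetector:
    """Track process creation/termination per nesting tier.

    Termination is detected when, at every tier, the number of processes
    produced equals the number consumed, a check that costs only integer
    additions and comparisons, as the abstract notes.
    """
    def __init__(self):
        self.produced = {}   # tier -> processes launched at that depth
        self.consumed = {}   # tier -> processes that have terminated

    def on_launch(self, tier):
        self.produced[tier] = self.produced.get(tier, 0) + 1

    def on_terminate(self, tier):
        self.consumed[tier] = self.consumed.get(tier, 0) + 1

    def all_terminated(self):
        # Invariant: production == consumption at each tier of nesting.
        return all(self.produced.get(t, 0) == self.consumed.get(t, 0)
                   for t in self.produced)

# Example: a root task (tier 0) spawns two children (tier 1).
d = TieredDetector()
d.on_launch(0)
d.on_launch(1); d.on_launch(1)
d.on_terminate(1); d.on_terminate(1)
print(d.all_terminated())  # False: the tier-0 root is still running
d.on_terminate(0)
print(d.all_terminated())  # True: every tier balances
```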

19 citations


Proceedings ArticleDOI
18 Oct 2007
TL;DR: The achievements of the SOFTICE project in undergraduate networking labs are discussed, and the approach is compared to alternative solutions that recently emerged in the CS and IT education communities.
Abstract: Authentic learning in an undergraduate networking course is best achieved when students have privileged access to their workstations and the ability to set up networks of arbitrary size and complexity. With such privileges come great risks for hosting networks. While the initial response to these challenges was to digitally isolate dedicated laboratories, today virtual machines are considered the best practice in setting up such labs. The authors' NSF-sponsored project, "SOFTICE", showed almost 3 years ago that open source virtualization solutions could solve the classroom management headaches provoked by early solutions and also provide new pedagogical opportunities. Our project illustrates the milestones that the adoption of virtualization by IT educational institutions followed, going one step further to sketch out what might be just around the corner. Instead of relying on deploying virtualization suites on each workstation, letting a set of pre-defined virtual machines run constantly on assigned hardware, or managing the transfer of large virtual HD images over networks, we opted from the start for centralized hosting of VMs on a load-balancing cluster which abstracts the hardware constraints. As enrollment grows, more nodes can be added. As usage of some nodes increases, incoming remote connections are automatically spread to idle hardware. This paper discusses the achievements of the SOFTICE project in undergraduate networking labs and compares our approach to alternative solutions which recently emerged in the CS and IT education communities. More specifically, we discuss the synergy between clustering and virtualization technologies and showcase some of the pedagogical benefits by detailing existing laboratories.

14 citations


Journal ArticleDOI
TL;DR: This work discusses the use of User-Mode Linux (UML) and MLN, an application that allows implementation of predefined or student-designed virtual networks of arbitrary complexity, on a low-cost, scalable load-balancing Linux cluster, illustrating how their implementation fulfills necessary and desirable goals for an effective networking lab.
Abstract: The teaching of a practical laboratory component of certain computer science courses such as networking has, in the past, required dedicated laboratories, isolated from the campus networking infrastructure. During the past few years, virtualization has emerged as a practical alternative to this resource-intensive and very limiting solution. User-Mode Linux (UML) is a virtualization technology that offers many advantages; MLN is an application that allows implementation of predefined or student-designed virtual networks of arbitrary complexity. We discuss the use of these tools on a low-cost, scalable load-balancing Linux cluster, illustrating how our implementation fulfills necessary and desirable goals for an effective networking lab.

8 citations


Journal Article
TL;DR: An analysis of when the C language might be most useful in the curriculum, how it should be introduced, and what specific topics should be covered in a re-designed "intermediate programming in C" course is proposed.
Abstract: In December 2006, an anonymous online survey was publicized on various ACM mailing lists (SIGCSE, SIGITE). Its purpose was to determine the role of the C language in modern computing curricula (CS, IT...). This paper summarizes the results and highlights the quantitative usage of this language in introductory and intermediate programming courses as well as in upper-level undergraduate courses (e.g. operating systems). We also present the qualitative reasons provided by our respondents for, or against, the adoption of the C language in these various courses. We then discuss these results and propose an analysis of when the C language might be most useful in the curriculum, how it should be introduced, and what specific topics should be covered in such a re-designed "intermediate programming in C" course.

7 citations


Journal ArticleDOI
TL;DR: The work contributes to the hospital marketing literature by examining the relationships between job resourcefulness, personality influencers, role stressors, and job tenure.
Abstract: In today's competitive hospital marketing environment, it is imperative that administrators ensure that their hospitals operate as efficiently and effectively as possible. "Doing more with less" has become a mandate for hospital administrators and employees. The current research replicates and extends previous work on this topic by examining the job resourcefulness construct in a hospital setting. Job resourcefulness, an individual difference variable, assesses the degree to which employees are able to overcome resource constraints in the pursuit of job-related goals. The work contributes to the hospital marketing literature by examining the relationships between job resourcefulness, personality influencers, role stressors, and job tenure. Research implications and suggestions for future work in the area are presented.

Proceedings ArticleDOI
01 Aug 2007
TL;DR: The results show that this encoding method makes each bit unique and deterministic, independent of the memory array size.
Abstract: Much progress is being made in the fabrication of molecular devices and nanoscale circuits. Such strides have led to studies and experimental tests using these devices in non-volatile memory arrays. However, the architecture of such arrays makes it difficult to accurately determine the value of each stored bit in the memory. When reading, each bit is affected by the rest of the memory through variable numbers of "stray current paths". This paper presents the idea of data encoding to thwart the impact of these stray currents. The results show that this encoding method makes each bit unique and deterministic, independent of the memory array size. Details of the encoding scheme, the hardware design, and layouts are presented throughout this work.
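For intuition, a small numeric sketch of the sneak-path effect the encoding is designed to counter; the resistance values and the 2x2 worst case are illustrative assumptions, and the paper's actual encoding scheme is not reproduced here:

```python
# Illustrative numbers only: this sketches why stray ("sneak") current paths
# corrupt crossbar reads, the problem the paper's encoding targets. R_ON and
# R_OFF are hypothetical device resistances.

R_ON, R_OFF = 1e3, 1e6   # low-/high-resistance cell states, in ohms

def parallel(*rs):
    """Equivalent resistance of resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in rs)

# Reading one cell in a 2x2 crossbar: the selected cell sits in parallel with
# a sneak path running in series through the three unselected cells.
target = R_OFF                # stored '0' (high resistance)
sneak = R_ON + R_ON + R_ON    # worst case: all three neighbors store '1'
effective = parallel(target, sneak)

print(f"ideal read resistance: {target:.0f} ohms")
print(f"with sneak path:       {effective:.0f} ohms")
# The sneak path drags the measured resistance down near R_ON, so the stored
# '0' can be misread as '1', and the number of such paths grows with array
# size, which is why the read value depends on the rest of the memory.
```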

Proceedings ArticleDOI
01 Dec 2007
TL;DR: An algorithmic analysis of GFA is presented as a starting point for applying a mapping methodology in order to explore possible array architectures suitable for its acceleration; the analysis shows that a two-dimensional systolic array with two different types of processors is the most promising architecture for GFA acceleration.
Abstract: A novel video coding scheme has been proposed previously to address increasing demands on bitrate improvements. This scheme relies on the generalized finite automata (GFA) representation to encode a video sequence. However, the computational workload of the scheme may overwhelm software implementations to the point of hampering the throughput required by the target video applications. This paper presents an algorithmic analysis of GFA as a starting point for applying a mapping methodology in order to explore possible array architectures suitable for its acceleration. The performance of potential candidate architectures is evaluated on a set of appropriate performance parameters. The result of this evaluation shows that a two-dimensional systolic array with two different types of processors is the most promising architecture for GFA acceleration.