Author

Daniel Franklin

Other affiliations: University of Wollongong
Bio: Daniel Franklin is an academic researcher from the University of Technology, Sydney. The author has contributed to research topics including Relay and Imaging phantom. The author has an h-index of 12, and has co-authored 79 publications receiving 815 citations. Previous affiliations of Daniel Franklin include the University of Wollongong.


Papers
Journal ArticleDOI
TL;DR: This paper explores the design choices made in the development of clustering algorithms targeted at VANETs and presents a taxonomy of the techniques applied to solve the problems of cluster head election, cluster affiliation, and cluster management, and identifies new directions and recent trends in the design of these algorithms.
Abstract: A vehicular ad hoc network (VANET) is a mobile ad hoc network in which network nodes are vehicles—most commonly road vehicles. VANETs present a unique range of challenges and opportunities for routing protocols due to the semi-organized nature of vehicular movements subject to the constraints of road geometry and rules, and the obstacles which limit physical connectivity in urban environments. In particular, the problems of routing protocol reliability and scalability across large urban VANETs are currently the subject of intense research. Clustering can be used to improve routing scalability and reliability in VANETs, as it results in the distributed formation of hierarchical network structures by grouping vehicles together based on correlated spatial distribution and relative velocity. In addition to the benefits to routing, these groups can serve as the foundation for accident or congestion detection, information dissemination and entertainment applications. This paper explores the design choices made in the development of clustering algorithms targeted at VANETs. It presents a taxonomy of the techniques applied to solve the problems of cluster head election, cluster affiliation, and cluster management, and identifies new directions and recent trends in the design of these algorithms. Additionally, methodologies for validating clustering performance are reviewed, and a key shortcoming—the lack of realistic vehicular channel modeling—is identified. The importance of a rigorous and standardized performance evaluation regime utilizing realistic vehicular channel models is demonstrated.
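
To make the cluster-head election problem discussed above concrete, the sketch below (not taken from the paper; the mobility metric, weights and radio range are illustrative assumptions) shows a minimal mobility-based election in which the vehicle with the lowest aggregate relative mobility among its one-hop neighbours becomes cluster head.

```python
# Illustrative sketch of a mobility-based cluster-head election of the kind
# surveyed above.  Each vehicle scores its one-hop neighbours by a weighted
# position/velocity difference; the node with the lowest aggregate relative
# mobility in its neighbourhood is elected cluster head.
from dataclasses import dataclass
import math

@dataclass
class Vehicle:
    vid: int
    x: float       # position along the road (m)
    v: float       # velocity (m/s)

def relative_mobility(a: Vehicle, b: Vehicle, w_pos=1.0, w_vel=5.0) -> float:
    """Weighted distance in position/velocity space (weights are arbitrary here)."""
    return w_pos * abs(a.x - b.x) + w_vel * abs(a.v - b.v)

def elect_cluster_head(vehicles, radio_range=300.0) -> Vehicle:
    """Return the vehicle with the smallest total relative mobility
    with respect to its neighbours within radio range."""
    def score(v):
        neighbours = [u for u in vehicles
                      if u.vid != v.vid and abs(u.x - v.x) <= radio_range]
        return sum(relative_mobility(v, u) for u in neighbours) or math.inf
    return min(vehicles, key=score)

vehicles = [Vehicle(1, 0.0, 27.0), Vehicle(2, 40.0, 28.0), Vehicle(3, 90.0, 26.5)]
print(elect_cluster_head(vehicles).vid)   # the centrally placed vehicle 2 becomes head
```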

379 citations

Journal ArticleDOI
TL;DR: A method for optimising scintillator thickness to maximise the probability of locating the point of interaction of 511 keV photons in a monolithic scintillator within a specified error bound is proposed and evaluated.
Abstract: High-resolution arrays of discrete monocrystalline scintillators used for gamma photon coincidence detection in PET are costly and complex to fabricate, and exhibit intrinsically non-uniform sensitivity with respect to emission angle. Nanocomposites and transparent ceramics are two alternative classes of scintillator materials which can be formed into large monolithic structures, and which, when coupled to optical photodetector arrays, may offer a pathway to low cost, high-sensitivity, high-resolution PET. However, due to their high optical attenuation and scattering relative to monocrystalline scintillators, these materials exhibit an inherent trade-off between detection sensitivity and the number of scintillation photons which reach the optical photodetectors. In this work, a method for optimising scintillator thickness to maximise the probability of locating the point of interaction of 511 keV photons in a monolithic scintillator within a specified error bound is proposed and evaluated for five nanocomposite materials (LaBr3:Ce-polystyrene, Gd2O3-polyvinyl toluene, LaF3:Ce-polystyrene, LaF3:Ce-oleic acid and YAG:Ce-polystyrene) and four ceramics (GAGG:Ce, GLuGAG:Ce, GYGAG:Ce and LuAG:Pr). LaF3:Ce-polystyrene and GLuGAG:Ce were the best-performing nanocomposite and ceramic materials, respectively, with maximum sensitivities of 48.8% and 67.8% for 5 mm localisation accuracy with scintillator thicknesses of 42.6 mm and 27.5 mm, respectively.
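
The thickness trade-off described above can be illustrated with a toy model (the functional forms and coefficients below are assumptions for illustration only, not the paper's material-specific optimisation): sensitivity grows with thickness while the probability of localising the interaction point within the error bound falls, and the product is maximised over thickness.

```python
# Toy illustration of the sensitivity-vs-localisation trade-off: detection
# sensitivity grows with scintillator thickness t, while the chance of
# localising the interaction point within a fixed error bound is assumed to
# fall as optical attenuation and scatter increase with depth.
import numpy as np

MU_511 = 0.09        # assumed linear attenuation coefficient at 511 keV (1/mm)
K_LOC  = 0.02        # assumed rate at which localisation accuracy degrades (1/mm)

def detection_sensitivity(t_mm):
    """Probability that a 511 keV photon interacts in a slab of thickness t."""
    return 1.0 - np.exp(-MU_511 * t_mm)

def localisation_probability(t_mm):
    """Assumed probability that the interaction point is recovered within
    the error bound; decreases with thickness due to optical losses."""
    return np.exp(-K_LOC * t_mm)

t = np.linspace(1.0, 60.0, 600)
objective = detection_sensitivity(t) * localisation_probability(t)
t_opt = t[np.argmax(objective)]
print(f"optimal thickness ~ {t_opt:.1f} mm, objective = {objective.max():.3f}")
```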

39 citations

Journal ArticleDOI
TL;DR: The prototype of HDR BrachyView demonstrates a satisfactory level of accuracy in its source position estimation, and additional improvements are achievable with further refinement of HDR BrachyView's image processing algorithms.
Abstract: Purpose: This paper presents initial experimental results from a prototype of high dose rate (HDR) BrachyView, a novel in-body source tracking system for HDR brachytherapy based on a multipinhole tungsten collimator and a high resolution pixellated silicon detector array. The probe and its associated position estimation algorithms are validated and a comprehensive evaluation of the accuracy of its position estimation capabilities is presented. Methods: The HDR brachytherapy source is moved through a sequence of positions in a prostate phantom, for various displacements in x, y, and z. For each position, multiple image acquisitions are performed, and source positions are reconstructed. Error estimates in each dimension are calculated at each source position and combined to calculate overall positioning errors. Gafchromic film is used to validate the accuracy of source placement within the phantom. Results: More than 90% of evaluated source positions were estimated with an error of less than one millimeter, with the worst-case error being 1.3 mm. Experimental results were in close agreement with previously published Monte Carlo simulation results. Conclusions: The prototype of HDR BrachyView demonstrates a satisfactory level of accuracy in its source position estimation, and additional improvements are achievable with further refinement of HDR BrachyView's image processing algorithms.
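
A minimal sketch of the error analysis described above (illustrative only, not the authors' code; the example positions are hypothetical): per-axis differences between reconstructed and nominal source positions are combined into an overall Euclidean error, from which the worst case and the fraction within 1 mm follow.

```python
# Combine per-axis source position errors into an overall positioning error
# and report the worst case and the fraction of positions within 1 mm.
import numpy as np

def positioning_errors(estimated_xyz: np.ndarray, nominal_xyz: np.ndarray) -> np.ndarray:
    """estimated_xyz, nominal_xyz: (N, 3) arrays of source positions in mm."""
    per_axis = estimated_xyz - nominal_xyz            # (N, 3) signed errors
    return np.linalg.norm(per_axis, axis=1)           # (N,) overall errors

# hypothetical example data (mm)
nominal   = np.array([[0, 0, 10], [0, 0, 20], [5, 0, 20]], dtype=float)
estimated = np.array([[0.3, -0.2, 10.4], [0.1, 0.2, 20.1], [5.6, 0.4, 20.9]], dtype=float)

err = positioning_errors(estimated, nominal)
print("worst-case error (mm):", err.max())
print("fraction within 1 mm :", np.mean(err < 1.0))
```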

33 citations

Journal ArticleDOI
TL;DR: This paper shows that these three techniques can be used to overcome the problem of dead spots within a body area network and extend the communication range without increasing the transmission power and the antenna size or decreasing receiver sensitivity.
Abstract: The use of near field magnetic induction communication (NFMIC) in body area network communications is discussed. Three multihop relay strategies for NFMIC are proposed: Non Line of Sight Magnetic Induction Relay (NLoS-MI Relay), Non Line of Sight Master/Assistant Magnetic Induction Relay1 (NLoS-MAMI Relay1) and Non Line of Sight Master/Assistant Magnetic Induction Relay2 (NLoS-MAMI Relay2). In the first approach, only one node contributes to the communication, while in the other two techniques (which are based on a master-assistant strategy), two relaying nodes are employed. This paper shows that these three techniques can be used to overcome the problem of dead spots within a body area network and extend the communication range without increasing the transmission power or antenna size, or decreasing receiver sensitivity. The impact of the separation distance between the nodes on the achievable RSS and channel data rate is evaluated for the three techniques. It is demonstrated that the most effective technique depends on the specific network topology. Optimum selection of nodes as relay master and assistant based on the location of the nodes is discussed. The paper also studies the impact of the quality factor on achievable data rate. It is shown that to obtain the highest data rate, the optimum quality factor needs to be determined for each proposed cooperative communication method.
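
As a rough illustration of why relay placement matters in NFMIC (the path-loss model below is an assumed near-field approximation with a 60 dB-per-decade roll-off, and the node positions, reference distance and path-loss constant are hypothetical), the sketch picks a relay master that maximises the weaker of the two hop RSS values between source and destination.

```python
# In the magnetic near field, received signal strength falls off very steeply
# with distance (roughly 1/d^6, i.e. about 60 dB per decade), which is why
# dead spots appear and why a well-placed relay node helps.
import math

def rss_dbm(p_tx_dbm: float, d_m: float, d0_m: float = 0.1, pl0_db: float = 40.0) -> float:
    """Assumed near-field RSS with a 60 dB/decade roll-off beyond reference distance d0."""
    return p_tx_dbm - pl0_db - 60.0 * math.log10(max(d_m, d0_m) / d0_m)

def pick_relay_master(src, dst, candidates, p_tx_dbm=0.0):
    """Choose the candidate whose weaker hop (src->relay or relay->dst) is strongest."""
    def bottleneck(c):
        return min(rss_dbm(p_tx_dbm, math.dist(src, c)),
                   rss_dbm(p_tx_dbm, math.dist(c, dst)))
    return max(candidates, key=bottleneck)

src, dst = (0.0, 0.0), (0.6, 0.0)                 # node positions in metres
candidates = [(0.2, 0.05), (0.3, 0.0), (0.45, 0.1)]
print("chosen relay master:", pick_relay_master(src, dst, candidates))
```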

31 citations

Proceedings ArticleDOI
04 Dec 2007
TL;DR: This paper surveys control separation techniques in multi-radio multi-channel MAC protocols, and provides a classification of these techniques.
Abstract: The rapid decline in the cost of commodity wireless hardware in recent years has prompted the use of multiple radios to improve the capacity of wireless networks. However, research has shown that the improvement obtained from using multiple radios does not depend solely on the number of radios, but primarily on how these radios are integrated in a constructive manner. A common way of integrating multiple radios is to use a dedicated radio for control. To date, a number of multi-radio MAC protocols employ a dedicated radio to control and coordinate the other radios, though the approaches vary from one protocol to another. In this paper, control separation techniques in multi-radio multi-channel MAC protocols are surveyed, and a classification of control separation techniques is provided. Moreover, this study points out open research issues and aims to spark new interest and developments in this field.
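
The dedicated-control-radio pattern surveyed above can be sketched as follows (a hypothetical toy protocol, not any specific MAC from the survey): channel negotiation happens on a fixed control channel, and the data transfer then uses whichever data channel both nodes have free.

```python
# Toy illustration of control/data separation: all negotiation is carried out
# on a fixed control channel, and the agreed data channel is then used by a
# separate data radio at each node.
from dataclasses import dataclass, field

CONTROL_CHANNEL = 0
DATA_CHANNELS = [1, 2, 3]

@dataclass
class MultiRadioNode:
    node_id: int
    busy_channels: set = field(default_factory=set)

    def request_channel(self, peer: "MultiRadioNode"):
        """Sent over the control radio: propose the first data channel free at both ends."""
        for ch in DATA_CHANNELS:
            if ch not in self.busy_channels and ch not in peer.busy_channels:
                self.busy_channels.add(ch)
                peer.busy_channels.add(ch)
                return ch
        return None   # defer: no common free data channel

a, b = MultiRadioNode(1), MultiRadioNode(2)
ch = a.request_channel(b)
print(f"node 1 -> node 2 on data channel {ch}, control stays on channel {CONTROL_CHANNEL}")
```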

30 citations


Cited by
Journal ArticleDOI
TL;DR: This paper is the first to present the state-of-the-art of SAGIN, since existing survey papers focused on either a single network segment (space or air) or on space-ground integration, neglecting the integration of all three network segments.
Abstract: Space-air-ground integrated network (SAGIN), as an integration of satellite systems, aerial networks, and terrestrial communications, has become an emerging architecture and has attracted intensive research interest in recent years. Besides bringing significant benefits for various practical services and applications, SAGIN also faces many unprecedented challenges due to its specific characteristics, such as heterogeneity, self-organization, and time-variability. Compared to traditional ground or satellite networks, SAGIN is affected by limited and unbalanced network resources in all three network segments, making it difficult to obtain the best performance for traffic delivery. Therefore, system integration, protocol optimization, and resource management and allocation in SAGIN are of great significance. To the best of our knowledge, we are the first to present the state-of-the-art of SAGIN, since existing survey papers focused on either a single network segment (space or air) or on space-ground integration, neglecting the integration of all three network segments. In light of this, we present in this paper a comprehensive review of recent research works concerning SAGIN, from network design and resource allocation to performance analysis and optimization. After discussing several existing network architectures, we also point out some technology challenges and future directions.

661 citations

Book ChapterDOI
27 Jan 2005
TL;DR: This chapter will focus on evaluating the pairwise error probability with and without CSI, and how the results of these evaluations can be used via the transfer bound approach to evaluate the average BEP of coded modulation transmitted over the fading channel.
Abstract: In studying the performance of coded communications over memoryless channels (with or without fading), the results are given as upper bounds on the average bit error probability (BEP). In principle, there are three different approaches to arriving at these bounds, all of which involve obtaining the so-called pairwise error probability, or the probability of choosing one symbol sequence over another for a given pair of possible transmitted symbol sequences, followed by a weighted summation over all pairwise events. In this chapter, we will focus on the results obtained from the third approach, since these provide the tightest upper bounds on the true performance. The first emphasis will be placed on evaluating the pairwise error probability with and without CSI, following which we shall discuss how the results of these evaluations can be used via the transfer bound approach to evaluate the average BEP of coded modulation transmitted over the fading channel.
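
For reference, one commonly used form of the quantities named above, assuming coherent detection with perfect CSI and additive Gaussian noise of one-sided spectral density N_0, is the conditional pairwise error probability together with the resulting union-type (transfer) bound on the average BEP:

```latex
% Conditional pairwise error probability between transmitted sequence x and
% competing sequence \hat{x}, given the fading gains h_k (perfect CSI assumed):
P\left(\mathbf{x}\rightarrow\hat{\mathbf{x}} \,\middle|\, \{h_k\}\right)
  = Q\!\left(\sqrt{\frac{1}{2N_0}\sum_{k\in\eta}|h_k|^2\,|x_k-\hat{x}_k|^2}\right),
\qquad \eta = \{k : x_k \neq \hat{x}_k\}.

% Averaging over the fading and weighting each pairwise event by its bit
% multiplicity w(x, \hat{x}) gives the union/transfer bound on the average BEP,
% with n_b the number of information bits per transmitted sequence:
P_b \;\le\; \frac{1}{n_b}\sum_{\mathbf{x}} p(\mathbf{x})
      \sum_{\hat{\mathbf{x}}\neq\mathbf{x}} w(\mathbf{x},\hat{\mathbf{x}})\,
      \mathbb{E}_{\{h_k\}}\!\left[P\left(\mathbf{x}\rightarrow\hat{\mathbf{x}} \,\middle|\, \{h_k\}\right)\right].
```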

648 citations

Journal ArticleDOI
TL;DR: A conceptual, generic, and expandable framework for classifying the existing PLS techniques against wireless passive eavesdropping is proposed, and the security techniques that are reviewed are divided into two primary approaches: a signal-to-interference-plus-noise ratio-based approach and a complexity-based approach.
Abstract: Physical layer security (PLS) has emerged as a new concept and powerful alternative that can complement and may even replace encryption-based approaches, which entail many hurdles and practical problems for future wireless systems. The basic idea of PLS is to exploit the characteristics of the wireless channel and its impairments including noise, fading, interference, dispersion, diversity, etc. in order to ensure the ability of the intended user to successfully perform data decoding while preventing eavesdroppers from doing so. Thus, the main design goal of PLS is to increase the performance difference between the link of the legitimate receiver and that of the eavesdropper by using well-designed transmission schemes. In this survey, we propose a conceptual, generic, and expandable framework for classifying the existing PLS techniques against wireless passive eavesdropping. In this flexible framework, the security techniques that we comprehensively review in this treatise are divided into two primary approaches: the signal-to-interference-plus-noise ratio-based approach and the complexity-based approach. The first approach is classified into three major categories: first, secrecy channel codes-based schemes; second, security techniques based on channel adaptation; third, schemes based on injecting interfering artificial (noise/jamming) signals along with the transmitted information signals. The second approach (complexity-based), which is associated with the mechanisms of extracting secret sequences from the shared channel, is classified into two main categories based on the layer at which the secret sequence obtained by channel quantization is applied. The techniques belonging to each one of these categories are divided and classified into three main signal domains: time, frequency and space. For each one of these domains, several examples are given and illustrated along with the review of the state-of-the-art security advances in each domain. Moreover, the advantages and disadvantages of each approach alongside the lessons learned from existing research works are stated and discussed. The recent applications of PLS techniques to different emerging communication systems such as visible light communication, body area network, power line communication, Internet of Things, smart grid, mm-Wave, cognitive radio, vehicular ad-hoc network, unmanned aerial vehicle, ultra-wideband, device-to-device, radio-frequency identification, index modulation, and 5G non-orthogonal multiple access based systems are also reviewed and discussed. The paper is concluded with recommendations and future research directions for designing robust, efficient and strong security methods for current and future wireless systems.
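
The design goal stated above, maximising the gap between the legitimate link and the eavesdropper's link, is commonly quantified by the secrecy capacity; for a Gaussian wiretap channel it takes the standard form below (notation assumed here: B denotes the legitimate receiver, E the eavesdropper).

```latex
% Secrecy capacity of a Gaussian wiretap channel: the rate advantage of the
% legitimate receiver (B) over the eavesdropper (E), floored at zero.
C_s = \Big[\log_2\!\big(1+\mathrm{SINR}_B\big) - \log_2\!\big(1+\mathrm{SINR}_E\big)\Big]^{+},
\qquad [z]^{+} = \max(z, 0).
```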

457 citations

Journal ArticleDOI
TL;DR: This paper surveys the networking and communication technologies in autonomous driving from two aspects: intra- and inter-vehicle.
Abstract: The development of light detection and ranging (LiDAR), radar, camera, and other advanced sensor technologies inaugurated a new era in autonomous driving. However, due to the intrinsic limitations of these sensors, autonomous vehicles are prone to making erroneous decisions and causing serious disasters. At this point, networking and communication technologies can greatly compensate for sensor deficiencies, and offer a more reliable, feasible and efficient way to exchange information, thereby improving an autonomous vehicle's perception and planning capabilities as well as enabling better vehicle control. This paper surveys the networking and communication technologies in autonomous driving from two aspects: intra- and inter-vehicle. The intra-vehicle network, as the basis of realizing autonomous driving, connects the on-board electronic parts. The inter-vehicle network is the medium for interaction between vehicles and the outside world. In addition, we present the new trends of communication technologies in autonomous driving, investigate the current mainstream verification methods, and emphasize the challenges and open issues of networking and communications in autonomous driving.

335 citations