Bio: Antony Waldock is an academic researcher from BAE Systems. The author has contributed to research in topics: Supervised learning & Computational intelligence. The author has an h-index of 8, co-authored 22 publications receiving 159 citations.
TL;DR: This study focuses on the generation and evaluation of a trajectory to perform a perched landing on the ground using a non-linear constraint optimiser (Interior Point OPTimizer) and a Deep Q-Network (DQN).
Abstract: A UAV with a variable sweep wing has the potential to perform a perched landing on the ground by achieving high pitch rates to take advantage of dynamic stall. This study focuses on the generation and evaluation of a trajectory to perform a perched landing on the ground using a non-linear constraint optimiser (Interior Point OPTimizer) and a Deep Q-Network (DQN). The trajectory is generated using a numerical model, developed through wind tunnel testing, that characterises the dynamics of a UAV with a variable sweep wing. The trajectories generated by a DQN have been compared with those produced by non-linear constraint optimisation in simulation and flown on the UAV to evaluate performance. The results show that a DQN generates trajectories with a lower cost function and has the potential to generate trajectories from a range of starting conditions (on average, generating a trajectory takes 174 milliseconds). The generated trajectories performed a rapid pitch-up before the landing site was reached, to reduce the airspeed (on average less than 0.5 m/s just above the landing site) without gaining altitude, and then dropped the nose just before hitting the ground to allow the aircraft to be recovered without damaging the tail. The trajectories generated by a DQN produced a final downward airspeed on ground contact of 3.25 m/s (with a standard deviation of 0.97 m/s), significantly less than a traditional landing (~10 m/s), which would allow the aircraft to be safely recovered.
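The DQN trajectory generator itself is not given in the text; the following is a rough, hypothetical sketch of the one-step temporal-difference update at the core of a DQN, with the neural network replaced by a linear map and an assumed 4-dimensional state (airspeed, altitude, pitch, sweep) and 3 discrete actions, none of which come from the paper.

```python
import numpy as np

# Hypothetical toy setting: the paper does not specify the state or action
# encoding, so a 4-dimensional state (e.g. airspeed, altitude, pitch, sweep)
# and 3 discrete actions are assumed purely for illustration.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(3, 4))  # linear stand-in for the Q-network

def q_values(state):
    # Q(s, a) for every action a at once.
    return W @ state

def td_target(reward, next_state, done, gamma=0.99):
    # One-step bootstrap target: r + gamma * max_a' Q(s', a'); no bootstrap at terminal states.
    return reward if done else reward + gamma * float(np.max(q_values(next_state)))

def dqn_step(state, action, reward, next_state, done, lr=0.001):
    # Semi-gradient update of the chosen action's value toward the TD target.
    global W
    error = td_target(reward, next_state, done) - q_values(state)[action]
    W[action] += lr * error * state
    return float(error)

s = np.array([9.0, 5.0, 0.1, 0.0])    # assumed pre-flare state
s2 = np.array([6.0, 3.0, 0.4, 0.2])   # assumed post-action state
dqn_step(s, action=1, reward=-1.0, next_state=s2, done=False)
```

A full DQN adds experience replay and a separate target network; this sketch keeps only the TD target and the gradient step toward it.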
TL;DR: In this paper, a variable sweep wing UAV is developed using off-the-shelf components with a custom mechanism for the wing box, which allows up to 30° forward sweep for significant pitching moments during the flare.
Abstract: A variable sweep wing UAV is developed utilising off-the-shelf components with a custom mechanism for the wing box. The movement of the wing sweep in flight enables large pitching moments suitable for performing perching manoeuvres. Wind tunnel data is presented that confirms the favourable characteristics expected from sweeping the wing and achieving high pitch rates. Whilst only small sweep changes are required during flight, the design allows up to 30° forward sweep for significant pitching moments during the flare. A new collection of controllers is developed based on observations from similar landing techniques performed by birds and hang-gliders onto flat ground. The three-stage landing process takes the aircraft along an approach path, through a roundout procedure during which airspeed decays, and concludes with a rapid pitch-up. Flight test results are presented during which it is found that the airspeed can be reduced to, on average, under 3 m/s in the final moments before landing – well below the stall speed of 9 m/s.
31 Jul 2015
TL;DR: In this paper, a set of multi-UAV supervisory control interfaces and a multi-agent coordination algorithm are developed to support human decision making in a setting where a team of humans oversee the coordination of multiple UAVs.
Abstract: We consider a setting where a team of humans oversee the coordination of multiple Unmanned Aerial Vehicles (UAVs) to perform a number of search tasks in dynamic environments that may cause the UAVs to drop out. Hence, we develop a set of multi-UAV supervisory control interfaces and a multi-agent coordination algorithm to support human decision making in this setting. To elucidate the resulting interactional issues, we compare manual and mixed-initiative task allocation in both static and dynamic environments in lab studies with 40 participants and observe that our mixed initiative system results in lower workloads and better performance in re-planning tasks than one which only involves manual task allocation. Our analysis points to new insights into the way humans appropriate flexible autonomy.
12 Jul 2011
TL;DR: The MOEA/D and SMPSO algorithms are shown to outperform the other multi-objective optimisation algorithms for this type of problem and have the advantage of returning a set of routes that represent the trade-off between objectives.
Abstract: This paper presents an evaluation of the benefits of multi-objective optimisation algorithms, compared to single objective optimisation algorithms, when applied to the problem of planning a route over an unstructured environment, where a route has a number of objectives defined using real-world data sources. The paper firstly introduces the problem of planning a route over an unstructured environment (one where no pre-determined set of possible routes exists) and identifies the data sources, Digital Terrain Elevation Data (DTED) and NASA Landsat Hyperspectral data, used to calculate the route objectives (time taken, exposure and fuel consumed). A number of different route planning problems are then used to compare the performance of two single-objective optimisation algorithms and a range of multi-objective optimisation algorithms selected from the literature. The experimental results show that the multi-objective optimisation algorithms result in significantly better routes than the single-objective optimisation algorithms and have the advantage of returning a set of routes that represent the trade-off between objectives. The MOEA/D and SMPSO algorithms are shown, in these experiments, to outperform the other multi-objective optimisation algorithms for this type of problem. Future work will focus on how these algorithms can be integrated into a route planning tool and especially on reducing the time taken to produce routes.
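The trade-off set the abstract refers to is a Pareto front. A minimal sketch of non-dominated filtering over hypothetical (time taken, exposure, fuel consumed) route costs, all minimised; the numbers are invented for illustration:

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly
    # better in at least one (all objectives minimised).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(routes):
    # Keep each candidate route unless some other route dominates it.
    return [r for r in routes if not any(dominates(o, r) for o in routes if o is not r)]

# Hypothetical (time taken, exposure, fuel consumed) triples for candidate routes.
routes = [(10, 5, 3), (8, 7, 3), (9, 6, 4), (10, 5, 4), (12, 9, 9)]
front = pareto_front(routes)  # the mutually non-dominated trade-off set
```

Here (10, 5, 4) and (12, 9, 9) are dominated and filtered out, leaving the three routes that trade time against exposure and fuel.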
01 Jun 2008
TL;DR: A hierarchical fuzzy rule based system is used to improve the generalisation of the control policy through iterative refinement of an initial coarse representation on a classical RL problem called the mountain car problem.
Abstract: Reinforcement learning (RL) is learning how to map states to actions so as to maximise a numeric reward signal. Fuzzy Q-learning (FQL) extends the RL technique Q-learning to large or continuous problems and has been applied to a wide range of applications from data mining to robot control. Typically, FQL uses a uniform or pre-defined internal representation provided by the human designer. A uniform representation usually provides poor generalisation for control applications, and a pre-defined representation requires the designer to have an in-depth knowledge of the desired control policy. In this paper, the approach taken is to reduce the reliance on a human designer by adapting the internal representation, to improve the generalisation over the control policy, during the learning process. A hierarchical fuzzy rule based system (HFRBS) is used to improve the generalisation of the control policy through iterative refinement of an initial coarse representation on a classical RL problem called the mountain car problem. The process of adapting the representation is shown to significantly reduce the time taken to learn a suitable control policy.
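For concreteness, a minimal tabular Q-learning sketch on the mountain car problem mentioned above, using a plain coarse uniform grid in place of the paper's hierarchical fuzzy representation; the dynamics and constants follow the standard textbook formulation, and the learning-rate and exploration settings are assumptions:

```python
import math, random

# Classic mountain car dynamics; action in {0, 1, 2} = push left / coast / push right.
def step(pos, vel, action):
    vel += 0.001 * (action - 1) - 0.0025 * math.cos(3 * pos)
    vel = max(-0.07, min(0.07, vel))
    pos = max(-1.2, min(0.6, pos + vel))
    if pos == -1.2:
        vel = 0.0  # inelastic collision with the left wall
    return pos, vel

def discretise(pos, vel, bins=10):
    # Coarse uniform grid over (position, velocity) — a fixed stand-in for the
    # paper's adaptively refined fuzzy representation.
    i = min(bins - 1, int((pos + 1.2) / 1.8 * bins))
    j = min(bins - 1, int((vel + 0.07) / 0.14 * bins))
    return i * bins + j

Q = [[0.0] * 3 for _ in range(100)]
random.seed(1)
for episode in range(200):
    pos, vel = -0.5, 0.0
    for t in range(500):
        s = discretise(pos, vel)
        # Epsilon-greedy action selection (epsilon = 0.1, assumed).
        a = random.randrange(3) if random.random() < 0.1 else max(range(3), key=lambda k: Q[s][k])
        pos, vel = step(pos, vel, a)
        s2, done = discretise(pos, vel), pos >= 0.6
        # Q-learning update: reward -1 per step until the goal is reached.
        target = 0.0 if done else -1.0 + 0.99 * max(Q[s2])
        Q[s][a] += 0.1 * (target - Q[s][a])
        if done:
            break
```

The point of the paper is precisely that a fixed uniform grid like this one generalises poorly, which motivates refining the representation during learning.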
TL;DR: This paper surveys the development of MOEAs primarily during the last eight years and covers algorithmic frameworks such as decomposition-based MOEAs (MOEA/Ds), memetic MOEAs, coevolutionary MOEAs, selection and offspring reproduction operators, MOEAs with specific search methods, MOEAs for multimodal problems, and constraint handling in MOEAs.
Abstract: A multiobjective optimization problem involves several conflicting objectives and has a set of Pareto optimal solutions. By evolving a population of solutions, multiobjective evolutionary algorithms (MOEAs) are able to approximate the Pareto optimal set in a single run. MOEAs have attracted a lot of research effort during the last 20 years, and they are still one of the hottest research areas in the field of evolutionary computation. This paper surveys the development of MOEAs primarily during the last eight years. It covers algorithmic frameworks such as decomposition-based MOEAs (MOEA/Ds), memetic MOEAs, coevolutionary MOEAs, selection and offspring reproduction operators, MOEAs with specific search methods, MOEAs for multimodal problems, constraint handling and MOEAs, computationally expensive multiobjective optimization problems (MOPs), dynamic MOPs, noisy MOPs, combinatorial and discrete MOPs, benchmark problems, performance indicators, and applications. In addition, some future research issues are also presented.
29 Apr 2010
TL;DR: Reinforcement Learning and Dynamic Programming Using Function Approximators provides a comprehensive and unparalleled exploration of the field of RL and DP, with a focus on continuous-variable problems.
Abstract: From household appliances to applications in robotics, engineered systems involving complex dynamics can only be as effective as the algorithms that control them. While Dynamic Programming (DP) has provided researchers with a way to optimally solve decision and control problems involving complex dynamic systems, its practical value was limited by algorithms that lacked the capacity to scale up to realistic problems. However, in recent years, dramatic developments in Reinforcement Learning (RL), the model-free counterpart of DP, changed our understanding of what is possible. Those developments led to the creation of reliable methods that can be applied even when a mathematical model of the system is unavailable, allowing researchers to solve challenging control problems in engineering, as well as in a variety of other disciplines, including economics, medicine, and artificial intelligence. Reinforcement Learning and Dynamic Programming Using Function Approximators provides a comprehensive and unparalleled exploration of the field of RL and DP. With a focus on continuous-variable problems, this seminal text details essential developments that have substantially altered the field over the past decade. In its pages, pioneering experts provide a concise introduction to classical RL and DP, followed by an extensive presentation of the state-of-the-art and novel methods in RL and DP with approximation. Combining algorithm development with theoretical guarantees, they elaborate on their work with illustrative examples and insightful comparisons. Three individual chapters are dedicated to representative algorithms from each of the major classes of techniques: value iteration, policy iteration, and policy search. The features and performance of these algorithms are highlighted in extensive experimental studies on a range of control applications. 
The recent development of applications involving complex systems has led to a surge of interest in RL and DP methods and the subsequent need for a quality resource on the subject. For graduate students and others new to the field, this book offers a thorough introduction to both the basics and emerging methods. And for those researchers and practitioners working in the fields of optimal and adaptive control, machine learning, artificial intelligence, and operations research, this resource offers a combination of practical algorithms, theoretical analysis, and comprehensive examples that they will be able to adapt and apply to their own work. Access the authors' website at www.dcsc.tudelft.nl/rlbook/ for additional material, including computer code used in the studies and information concerning new developments.
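As an illustration of the value-iteration class of techniques the book dedicates a chapter to, a minimal exact sketch on a hypothetical three-state MDP; the book's focus is the function-approximation case, which this small example deliberately avoids:

```python
# Hypothetical 3-state chain MDP: P[state][action] -> [(probability, next_state, reward)].
# State 2 is absorbing; the only reward is for moving from state 1 to state 2.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 0.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 2, 1.0)]},
    2: {0: [(1.0, 2, 0.0)], 1: [(1.0, 2, 0.0)]},
}
gamma = 0.9
V = {s: 0.0 for s in P}
for _ in range(100):
    # Bellman optimality backup: V(s) <- max_a sum p * (r + gamma * V(s')).
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in P[s].values())
         for s in P}
```

The iteration converges to V = {0: 0.9, 1: 1.0, 2: 0.0}: the optimal policy moves right, collecting the discounted unit reward. With a function approximator, the table `V` would be replaced by a parameterised fit over backed-up values.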
TL;DR: A comprehensive survey on UAVs and the related issues will be introduced, along with an envisioned UAV-based architecture for the delivery of value-added IoT services from the sky, and the relevant key challenges and requirements will be presented.
Abstract: Recently, unmanned aerial vehicles (UAVs), or drones, have attracted a lot of attention, since they represent a new potential market. Along with the maturity of the technology and relevant regulations, a worldwide deployment of these UAVs is expected. Thanks to the high mobility of drones, they can be used to provide a wide range of applications, such as service delivery, pollution mitigation, farming, and rescue operations. Due to its ubiquitous usability, the UAV will play an important role in the Internet of Things (IoT) vision, and it may become the main key enabler of this vision. While these UAVs would be deployed for specific objectives (e.g., service delivery), they can, at the same time, be used to offer new IoT value-added services when they are equipped with suitable and remotely controllable machine type communications (MTC) devices (i.e., sensors, cameras, and actuators). However, deploying UAVs for the envisioned purposes cannot be done before overcoming the relevant challenging issues. These challenges comprise not only technical issues, such as physical collision, but also regulation issues, as this nascent technology could be associated with problems like breaching people's privacy or even its use for illegal operations like drug smuggling. Providing communication to UAVs is another challenging issue facing the deployment of this technology. In this paper, a comprehensive survey on UAVs and the related issues will be introduced. In addition, our envisioned UAV-based architecture for the delivery of UAV-based value-added IoT services from the sky will be introduced, and the relevant key challenges and requirements will be presented.
TL;DR: Experimental results indicate that MOEA/D-AWA outperforms the benchmark algorithms in terms of the IGD metric, particularly when the PF of the MOP is complex.
Abstract: Recently, the multi-objective evolutionary algorithm based on decomposition (MOEA/D) has achieved great success in the field of evolutionary multi-objective optimization and has attracted a lot of attention. It decomposes a multi-objective optimization problem (MOP) into a set of scalar subproblems using uniformly distributed aggregation weight vectors and provides an excellent general algorithmic framework for evolutionary multi-objective optimization. Generally, the uniformity of weight vectors in MOEA/D can ensure the diversity of the Pareto optimal solutions; however, it does not work as well when the target MOP has a complex Pareto front (PF), i.e., a discontinuous PF or a PF with a sharp peak or low tail. To remedy this, we propose an improved MOEA/D with adaptive weight vector adjustment (MOEA/D-AWA). Based on an analysis of the geometric relationship between the weight vectors and the optimal solutions under the Chebyshev decomposition scheme, a new weight vector initialization method and an adaptive weight vector adjustment strategy are introduced in MOEA/D-AWA. The weights are adjusted periodically so that the weights of subproblems can be redistributed adaptively to obtain better uniformity of solutions. Meanwhile, computing effort devoted to subproblems with duplicate optimal solutions can be saved. Moreover, an external elite population is introduced to help add new subproblems into real sparse regions rather than pseudo-sparse regions of the complex PF, that is, discontinuous regions of the PF. MOEA/D-AWA has been compared with four state-of-the-art MOEAs, namely the original MOEA/D, Adaptive-MOEA/D, -MOEA/D, and NSGA-II, on 10 widely used test problems, two newly constructed complex problems, and two many-objective problems. Experimental results indicate that MOEA/D-AWA outperforms the benchmark algorithms in terms of the IGD metric, particularly when the PF of the MOP is complex.
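The Chebyshev (Tchebycheff) decomposition scheme the abstract refers to can be sketched in a few lines: each weight vector defines one scalar subproblem g(x | w, z*) = max_i w_i · |f_i(x) − z*_i|, where z* is the ideal point. The objective values, ideal point and weight vectors below are hypothetical, chosen only to illustrate the mechanics:

```python
def tchebycheff(objectives, weights, ideal):
    # Scalarised subproblem value g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|.
    return max(w * abs(f - z) for f, w, z in zip(objectives, weights, ideal))

# Decompose a hypothetical 2-objective problem into 5 scalar subproblems
# using uniformly distributed weight vectors (the original MOEA/D choice).
ideal = (0.0, 0.0)
weight_vectors = [(i / 4, 1 - i / 4) for i in range(5)]  # (0,1) ... (1,0)

candidate = (0.4, 0.8)  # objective values f1(x), f2(x) of some solution x
scores = [tchebycheff(candidate, w, ideal) for w in weight_vectors]
```

Each subproblem is minimised independently (with neighbourhood-based mating in MOEA/D); the adaptive-weight idea of MOEA/D-AWA is to redistribute the `weight_vectors` during the run when the uniform set maps poorly onto a complex PF.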
TL;DR: A comprehensive survey of the decomposition-based MOEAs proposed in the last decade is presented, including development of novel weight vector generation methods, use of new decomposition approaches, efficient allocation of computational resources, modifications in the reproduction operation, mating selection and replacement mechanism, hybridizing decompositions- and dominance-based approaches, etc.
Abstract: Decomposition is a well-known strategy in traditional multiobjective optimization. However, the decomposition strategy was not widely employed in evolutionary multiobjective optimization until Zhang and Li proposed the multiobjective evolutionary algorithm based on decomposition (MOEA/D) in 2007. MOEA/D decomposes a multiobjective optimization problem into a number of scalar optimization subproblems and optimizes them in a collaborative manner using an evolutionary algorithm (EA). Each subproblem is optimized by utilizing information mainly from its several neighboring subproblems. Since the proposal of MOEA/D in 2007, decomposition-based MOEAs have attracted significant attention from researchers. Investigations have been undertaken in several directions, including the development of novel weight vector generation methods, the use of new decomposition approaches, efficient allocation of computational resources, modifications to the reproduction operation, mating selection and replacement mechanisms, and hybridizing decomposition- and dominance-based approaches. Furthermore, several attempts have been made at extending the decomposition-based framework to constrained multiobjective optimization and many-objective optimization, and at incorporating the preferences of decision makers. Additionally, there have been many attempts at applying decomposition-based MOEAs to solve complex real-world optimization problems. This paper presents a comprehensive survey of the decomposition-based MOEAs proposed in the last decade.