Author

Peter D. Sozou

Bio: Peter D. Sozou is an academic researcher from the London School of Economics and Political Science. The author has contributed to research in topics including point distribution models and polynomial regression. The author has an h-index of 14 and has co-authored 31 publications receiving 914 citations. Previous affiliations of Peter D. Sozou include the University of Liverpool and University College London.

Papers
Journal ArticleDOI
TL;DR: The time-preference function predicted by this analysis can be calculated by means of either a direct superposition method or a Bayesian updating of the expected hazard rate.
Abstract: The value of a future reward should be discounted where there is a risk that the reward will not be realized. If the risk manifests itself at a known, constant hazard rate, a risk-neutral recipient ...
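As a brief sketch of the superposition idea (my notation, assuming an exponential prior over the unknown hazard rate purely for illustration, not necessarily the prior used in the paper): the probability that the reward survives to delay t is the prior-weighted average of exponential survival curves,

S(t) = \int_0^\infty e^{-\lambda t}\, p(\lambda)\, d\lambda, \qquad p(\lambda) = \tfrac{1}{\lambda_0} e^{-\lambda/\lambda_0},

which evaluates to

S(t) = \frac{1}{1 + \lambda_0 t},

a hyperbolic rather than exponential discount function. Equivalently, Bayesian updating after the reward has survived to time t gives a posterior mean hazard of \lambda_0/(1 + \lambda_0 t), which falls the longer the reward survives.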

255 citations

Journal ArticleDOI
TL;DR: A stochastic network model of cell senescence is described in which a primary role is played by telomere reduction but in which other mechanisms (oxidative stress linked particularly to mitochondrial damage, and nuclear somatic mutations) also contribute.
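To give a rough feel for how stochastic and deterministic contributions can be combined in such a model, here is a toy Python sketch (a hypothetical stand-in only, not the network model described in the paper): each cell division erodes telomeres and adds a random increment of damage, and senescence is triggered by whichever limit is reached first.

```python
import random

# Toy illustration only -- NOT the model from the paper.
# Senescence is triggered either by telomere shortening below a threshold
# or by accumulated stochastic damage (a stand-in for oxidative stress
# and nuclear somatic mutations).

def divisions_until_senescence(telomere=10000.0, damage=0.0,
                               loss_per_division=(50.0, 150.0),
                               mean_damage_per_division=0.02,
                               telomere_limit=4000.0, damage_limit=1.0):
    divisions = 0
    while telomere > telomere_limit and damage < damage_limit:
        telomere -= random.uniform(*loss_per_division)            # telomere erosion
        damage += random.expovariate(1.0 / mean_damage_per_division)  # stochastic damage
        divisions += 1
    return divisions

if __name__ == "__main__":
    runs = [divisions_until_senescence() for _ in range(1000)]
    print("mean divisions before senescence:", sum(runs) / len(runs))
```

Because both processes are stochastic, replicate "cells" senesce after different numbers of divisions, illustrating how a mixture of mechanisms produces variability in replicative lifespan.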

112 citations

Journal ArticleDOI
TL;DR: This study brings together life-history theory and time-preference theory within a single modelling framework and considers an animal encountering reproductive opportunities as a random process, finding that middle-aged animals may be the most long-term in their outlook.
Abstract: Discounting occurs when an immediate benefit is systematically valued more highly than a delayed benefit of the same magnitude. It is manifested in physiological and behavioural strategies of organisms. This study brings together life-history theory and time-preference theory within a single modelling framework. We consider an animal encountering reproductive opportunities as a random process. Under an external hazard, optimal life-history strategy typically prioritizes immediate reproduction at the cost of declining fertility and increasing mortality with age. Given such ageing, an immediate reproductive reward should be preferred to a delayed reward because of both the risk of death and declining fertility. By this analysis, ageing is both a consequence of discounting by the body and a cause of behavioural discounting. A series of models is developed, making different assumptions about external hazards and biological ageing. With realistic ageing assumptions (increasing mortality and an accelerating rate of fertility decline) the time-preference rate increases in old age. Under an uncertain external hazard rate, young adults should also have relatively high time-preference rates because their (Bayesian) estimate of the external hazard is high. Middle-aged animals may therefore be the most long term in their outlook.
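A back-of-the-envelope sketch of the link between ageing and discounting (my notation; consistent with the framework described above but not quoted from the paper): if m(t) is the mortality hazard at age t and f(t) is the expected payoff of a reproductive opportunity taken at age t, the instantaneous time-preference rate is approximately

\rho(t) \approx m(t) - \frac{f'(t)}{f(t)},

that is, the chance of dying per unit time plus the proportional rate at which fertility is declining. With mortality increasing and fertility decline accelerating, \rho(t) rises in old age; an uncertain external hazard adds a further (Bayesian) term that is highest in young adults and falls with survival, which is why middle age can be where the total discount rate is lowest.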

105 citations

Proceedings ArticleDOI
01 Jul 1995
TL;DR: This work presents a new form of PDM, which uses a multi-layer perceptron to carry out non-linear principal component analysis, and is the most general formulation of PDMs proposed to date.
Abstract: Objects of the same class sometimes exhibit variation in shape. This shape variation has previously been modelled by means of point distribution models (PDMs) in which there is a linear relationship between a set of shape parameters and the positions of points on the shape. A polynomial regression generalization of PDMs, which succeeds in capturing certain forms of non-linear shape variability, has also been described. Here we present a new form of PDM, which uses a multi-layer perceptron to carry out non-linear principal component analysis. We compare the performance of the new model with that of the existing models on two classes of variable shape: one exhibits bending, and the other exhibits complete rotation. The linear PDM fails on both classes of shape; the polynomial regression model succeeds for the first class of shapes but fails for the second; the new multi-layer perceptron model performs well for both classes of shape. The new model is the most general formulation for PDMs which has been proposed to date.
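To convey how a multi-layer perceptron can stand in for non-linear principal component analysis of shapes, here is a small autoencoder sketch (a hypothetical illustration using scikit-learn, not the network or training scheme used in the paper): the narrow middle layer plays the role of the non-linear shape parameters of the PDM.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic training set of "bending" shapes: 20 landmark points on a
# quadratic arc whose curvature varies from example to example.
n_shapes, n_points = 500, 20
curvature = rng.uniform(-1.0, 1.0, n_shapes)
s = np.linspace(-1.0, 1.0, n_points)
x = np.tile(s, (n_shapes, 1))
y = curvature[:, None] * s**2
shapes = np.hstack([x, y])            # each row: (x1..x20, y1..y20)

# Autoencoder: input -> 16 -> 2 -> 16 -> output, trained to reproduce its input.
ae = MLPRegressor(hidden_layer_sizes=(16, 2, 16), activation="tanh",
                  max_iter=5000, random_state=0)
ae.fit(shapes, shapes)

# Forward pass through the first two layers reads off the 2-D code,
# i.e. the learned non-linear shape parameters.
h1 = np.tanh(shapes @ ae.coefs_[0] + ae.intercepts_[0])
code = np.tanh(h1 @ ae.coefs_[1] + ae.intercepts_[1])
print("code shape:", code.shape)      # (500, 2)
```

A linear PDM would instead take the leading eigenvectors of the landmark covariance matrix; the bottleneck autoencoder generalizes this by allowing the mapping from shape parameters to landmark positions to be non-linear.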

54 citations

Journal ArticleDOI
TL;DR: This work presents a new form of PDM, which uses a multi-layer perceptron to carry out non-linear principal component analysis, and is the most general formulation of PDMs proposed to date.

53 citations


Cited by
Journal Article
TL;DR: For the next few weeks the course is going to be exploring a field that’s actually older than classical population genetics, although the approach it’ll be taking to it involves the use of population genetic machinery.
Abstract: So far in this course we have dealt entirely with the evolution of characters that are controlled by simple Mendelian inheritance at a single locus. There are notes on the course website about gametic disequilibrium and how allele frequencies change at two loci simultaneously, but we didn’t discuss them. In every example we’ve considered we’ve imagined that we could understand something about evolution by examining the evolution of a single gene. That’s the domain of classical population genetics. For the next few weeks we’re going to be exploring a field that’s actually older than classical population genetics, although the approach we’ll be taking to it involves the use of population genetic machinery. If you know a little about the history of evolutionary biology, you may know that after the rediscovery of Mendel’s work in 1900 there was a heated debate between the “biometricians” (e.g., Galton and Pearson) and the “Mendelians” (e.g., de Vries, Correns, Bateson, and Morgan). Biometricians asserted that the really important variation in evolution didn’t follow Mendelian rules. Height, weight, skin color, and similar traits seemed to

9,847 citations

Book
01 Nov 2002
TL;DR: Driving development with automated tests, a style of development called "Test-Driven Development" (TDD for short), aims to dramatically reduce the defect density of code and make the subject of work crystal clear to all involved.
Abstract: From the Book: "Clean code that works" is Ron Jeffries' pithy phrase. The goal is clean code that works, and for a whole bunch of reasons:

- Clean code that works is a predictable way to develop. You know when you are finished, without having to worry about a long bug trail.
- Clean code that works gives you a chance to learn all the lessons that the code has to teach you. If you only ever slap together the first thing you think of, you never have time to think of a second, better, thing.
- Clean code that works improves the lives of users of our software.
- Clean code that works lets your teammates count on you, and you on them.
- Writing clean code that works feels good.

But how do you get to clean code that works? Many forces drive you away from clean code, and even code that works. Without taking too much counsel of our fears, here's what we do: drive development with automated tests, a style of development called "Test-Driven Development" (TDD for short). In Test-Driven Development, you:

- Write new code only if you first have a failing automated test.
- Eliminate duplication.

Two simple rules, but they generate complex individual and group behavior. Some of the technical implications are:

- You must design organically, with running code providing feedback between decisions.
- You must write your own tests, since you can't wait twenty times a day for someone else to write a test.
- Your development environment must provide rapid response to small changes.
- Your designs must consist of many highly cohesive, loosely coupled components, just to make testing easy.

The two rules imply an order to the tasks of programming:

1. Red: write a little test that doesn't work, perhaps doesn't even compile at first.
2. Green: make the test work quickly, committing whatever sins necessary in the process.
3. Refactor: eliminate all the duplication created in just getting the test to work.

Red/green/refactor. The TDD mantra.

Assuming for the moment that such a style is possible, it might be possible to dramatically reduce the defect density of code and make the subject of work crystal clear to all involved. If so, writing only code demanded by failing tests also has social implications:

- If the defect density can be reduced enough, QA can shift from reactive to proactive work.
- If the number of nasty surprises can be reduced enough, project managers can estimate accurately enough to involve real customers in daily development.
- If the topics of technical conversations can be made clear enough, programmers can work in minute-by-minute collaboration instead of daily or weekly collaboration.
- Again, if the defect density can be reduced enough, we can have shippable software with new functionality every day, leading to new business relationships with customers.

So, the concept is simple, but what's my motivation? Why would a programmer take on the additional work of writing automated tests? Why would a programmer work in tiny little steps when their mind is capable of great soaring swoops of design? Courage.

Courage. Test-driven development is a way of managing fear during programming. I don't mean fear in a bad way, pow widdle prwogwammew needs a pacifiew, but fear in the legitimate, this-is-a-hard-problem-and-I-can't-see-the-end-from-the-beginning sense. If pain is nature's way of saying "Stop!", fear is nature's way of saying "Be careful." Being careful is good, but fear has a host of other effects:

- Makes you tentative.
- Makes you want to communicate less.
- Makes you shy from feedback.
- Makes you grumpy.

None of these effects are helpful when programming, especially when programming something hard. So, how can you face a difficult situation and:

- Instead of being tentative, begin learning concretely as quickly as possible.
- Instead of clamming up, communicate more clearly.
- Instead of avoiding feedback, search out helpful, concrete feedback.
- (You'll have to work on grumpiness on your own.)

Imagine programming as turning a crank to pull a bucket of water from a well. When the bucket is small, a free-spinning crank is fine. When the bucket is big and full of water, you're going to get tired before the bucket is all the way up. You need a ratchet mechanism to enable you to rest between bouts of cranking. The heavier the bucket, the closer the teeth need to be on the ratchet. The tests in test-driven development are the teeth of the ratchet. Once you get one test working, you know it is working, now and forever. You are one step closer to having everything working than you were when the test was broken. Now get the next one working, and the next, and the next. By analogy, the tougher the programming problem, the less ground should be covered by each test.

Readers of Extreme Programming Explained will notice a difference in tone between XP and TDD. TDD isn't an absolute like Extreme Programming. XP says, "Here are things you must be able to do to be prepared to evolve further." TDD is a little fuzzier. TDD is an awareness of the gap between decision and feedback during programming, and techniques to control that gap. "What if I do a paper design for a week, then test-drive the code? Is that TDD?" Sure, it's TDD. You were aware of the gap between decision and feedback and you controlled the gap deliberately.

That said, most people who learn TDD find their programming practice changed for good. "Test Infected" is the phrase Erich Gamma coined to describe this shift. You might find yourself writing more tests earlier, and working in smaller steps than you ever dreamed would be sensible. On the other hand, some programmers learn TDD and go back to their earlier practices, reserving TDD for special occasions when ordinary programming isn't making progress.

There are certainly programming tasks that can't be driven solely by tests (or at least, not yet). Security software and concurrency, for example, are two topics where TDD is not sufficient to mechanically demonstrate that the goals of the software have been met. Security relies on essentially defect-free code, true, but also on human judgement about the methods used to secure the software. Subtle concurrency problems can't be reliably duplicated by running the code.

Once you are finished reading this book, you should be ready to:

- Start simply.
- Write automated tests.
- Refactor to add design decisions one at a time.

This book is organized into three sections:

- An example of writing typical model code using TDD. The example is one I got from Ward Cunningham years ago, and have used many times since: multi-currency arithmetic. In it you will learn to write tests before code and grow a design organically.
- An example of testing more complicated logic, including reflection and exceptions, by developing a framework for automated testing. This example also serves to introduce you to the xUnit architecture that is at the heart of many programmer-oriented testing tools. In the second example you will learn to work in even smaller steps than in the first example, including the kind of self-referential hooha beloved of computer scientists.
- Patterns for TDD. Included are patterns for deciding what tests to write, how to write tests using xUnit, and a greatest-hits selection of the design patterns and refactorings used in the examples.

I wrote the examples imagining a pair programming session. If you like looking at the map before wandering around, you may want to go straight to the patterns in Section 3 and use the examples as illustrations. If you prefer just wandering around and then looking at the map to see where you've been, try reading the examples through, referring to the patterns when you want more detail about a technique, then using the patterns as a reference. Several reviewers have commented they got the most out of the examples when they started up a programming environment and entered the code and ran the tests as they read.

A note about the examples. Both examples, multi-currency calculation and a testing framework, appear simple. There are (and I have seen) complicated, ugly, messy ways of solving the same problems. I could have chosen one of those complicated, ugly, messy solutions to give the book an air of "reality." However, my goal, and I hope your goal, is to write clean code that works. Before teeing off on the examples as being too simple, spend 15 seconds imagining a programming world in which all code was this clear and direct, where there were no complicated solutions, only apparently complicated problems begging for careful thought. TDD is a practice that can help you lead yourself to exactly that careful thought.
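For readers unfamiliar with the red/green/refactor cycle described above, here is a minimal sketch in Python's unittest (the book's own examples use Java and xUnit; this Dollar class is a hypothetical stand-in for its multi-currency arithmetic example):

```python
import unittest

class Dollar:
    def __init__(self, amount):
        self.amount = amount

    def times(self, multiplier):
        # "Green": the simplest code that makes the failing test pass.
        return Dollar(self.amount * multiplier)

    def __eq__(self, other):
        # Added only when a test demanded value equality, then kept
        # clean by refactoring rather than by up-front design.
        return isinstance(other, Dollar) and self.amount == other.amount

class DollarTest(unittest.TestCase):
    def test_multiplication(self):
        # "Red": this test is written first and fails until Dollar exists
        # and behaves as required.
        five = Dollar(5)
        self.assertEqual(Dollar(10), five.times(2))

if __name__ == "__main__":
    unittest.main()
```

The cycle is: write the test, watch it fail, write just enough production code to pass, then refactor away any duplication before writing the next test.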

1,864 citations

Journal ArticleDOI
TL;DR: Statistical shape models (SSMs) have become firmly established as a robust tool for the segmentation of medical images, a development made possible primarily by breakthroughs in the automatic detection of shape correspondences.

1,402 citations

Book ChapterDOI
TL;DR: This chapter reviews the present empirical and theoretical work on antipredator decision making and suggests that further work is needed on the effects that predator and prey have on each other's behavioral decisions.
Abstract: This chapter reviews the present empirical and theoretical work on antipredator decision making. The ways in which predators influence the behavioral decisions made by their prey are now the subject of a large and growing literature. Some notable recent advances include clear demonstrations that antipredatory decision making (1) may influence many aspects of reproductive behavior, (2) has demonstrable long-term consequences for individual fitness, and (3) may influence the nature of ecological systems themselves. There have also been many advances in the theory of antipredator behavior, which should provide a sound conceptual basis for further progress. Further work is needed on the effects that predator and prey have on each other's behavioral decisions. The range of reproductive behaviors influenced by the risk of predation also requires much more investigation. Work on the long-term costs of antipredator decision making needs more empirical documentation and greater taxonomic diversity. Work on the ecological implications of antipredatory decision making has only scratched the surface, especially with regard to population-level effects and species interactions. Theoretical investigations should also play a prominent role in future work.

1,230 citations

Journal ArticleDOI
24 Oct 2002-Nature
TL;DR: Strong evidence is found that stochastic as well as genetic factors are significant in C. elegans ageing, with extensive variability both among same-age animals and between cells of the same type within individuals.
Abstract: The nematode Caenorhabditis elegans is an important model for studying the genetics of ageing, with over 50 life-extension mutations known so far. However, little is known about the pathobiology of ageing in this species, limiting attempts to connect genotype with senescent phenotype. Using ultrastructural analysis and visualization of specific cell types with green fluorescent protein, we examined cell integrity in different tissues as the animal ages. We report remarkable preservation of the nervous system, even in advanced old age, in contrast to a gradual, progressive deterioration of muscle, resembling human sarcopenia. The age-1(hx546) mutation, which extends lifespan by 60-100%, delayed some, but not all, cellular biomarkers of ageing. Strikingly, we found strong evidence that stochastic as well as genetic factors are significant in C. elegans ageing, with extensive variability both among same-age animals and between cells of the same type within individuals.

1,077 citations