
Showing papers in "Journal of Clinical Ethics in 2004"


Journal Article
Characteristics and Proportion of Dying Oregonians Who Personally Consider Physician-Assisted Suicide
TL;DR: According to physicians’ reports, patients most commonly sought PAS because of decreased quality of life, loss of autonomy and control of bodily functions, and feeling they were a burden to family.
Abstract: Physician-assisted suicide (PAS) became legally available in Oregon in October 1997. The Oregon Death with Dignity Act (ODDA) limits eligibility to adult Oregon residents who are judged by two physicians to have less than six months to live. Patients must be able to make independent decisions and ingest the lethal dose, and there is a 15-day waiting period between the request and receipt of a lethal prescription.[1] The Oregon Department of Human Services (DHS) compiles and reports statistics annually about those who receive a lethal prescription.[2] During the first six years of legalization, 171 persons died after ingesting a lethal prescription according to the requirements of the Act. Compared to the average Oregon decedent, PAS users were younger, better educated, more likely to be Caucasian or Asian, and more likely to be dying of chronic diseases. A majority of the 171 PAS users had health insurance and were enrolled in hospice, and all but one died in community settings. According to physicians’ reports, patients most commonly sought PAS because of decreased quality of life, loss of autonomy and control of bodily functions, and feeling they were a burden to family. The DHS data contribute valuable ...

128 citations

Journal Article
TL;DR: The system of local oversight is only partially effective in improving the design of experiments and the consent process in light of "unexpected (adverse) results," and the U.S. system remains far from perfect in responding when research goes wrong.
Abstract: The view that once prevailed in the U.S.--that research is no more dangerous than the activities of daily life--no longer holds in light of recent experience. Within the past few years, a number of subjects (including normal volunteers) have been seriously injured or killed in research conducted at prestigious institutions. Plainly, when we are talking about research going wrong, we are talking about something very important. We have seen that experiments can go wrong in several ways. Subjects can be injured--physically, mentally, or by having other interests violated. Investigators can commit fraud in data collection or can abuse subjects. And review mechanisms--such as IRBs--don't always work. The two major issues when research goes wrong in any of these ways are, first: What will be done for subjects who have suffered an injury or other wrong? and second: How will future problems be prevented? The present system in the U.S. is better at the second task than the first. Part of the difficulty in addressing the first lies in knowing what "caused" an apparent injury. Moreover, since until recently the problem of research-related injuries was thought to be a small one, there was considerable resistance to setting up a no-fault compensation system, for fear that it would lead to payment in many cases where such compensation was not deserved. Now, with a further nudge from the NBAC, there is renewed interest in developing a formal system to compensate for research injuries. Finally, I have tried to show that our system of local oversight is only partially effective in improving the design of experiments and the consent process in light of "unexpected (adverse) results." As many observers, including the federal General Accounting Office (GAO), have reported, the requirement for "continuing review" of approved research projects is the weak point in the IRB system. The probable solution would be to apply more strictly the requirement that investigators report back any adverse results, de-emphasizing the "screen" introduced by the present language about "unexpected" findings. Yet, despite its weaknesses, there are good aspects to the local basis of our oversight system, and when problems become severe enough, OHRP is likely to evaluate a system and insist on local improvements. Thus, while the U.S. system is far from perfect in responding when research goes wrong, our experience may be useful to others in crafting a system appropriate to their own circumstances. One of the major tasks will be to adequately define what triggers oversight--that is, who reports what to whom, and when? The setting of this trigger needs to balance appropriate incentives and penalties. Any system, including our own, will, in my opinion, work much better once an accreditation process is in place, which will offer much more current and detailed information on how each IRB is functioning and what steps are needed to help avoid "experiments going wrong."