
Showing papers by "Massachusetts Institute of Technology published in 1986"


Journal ArticleDOI
TL;DR: There is a natural uncertainty principle between detection and localization performance, which are the two main goals, and with this principle a single operator shape is derived which is optimal at any scale.
Abstract: This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge.

28,073 citations
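
A minimal sketch of the approximate implementation the abstract describes (edges marked at maxima of gradient magnitude of a Gaussian-smoothed image) might look as follows; the function name, parameters, and the simple four-direction non-maximum suppression are illustrative choices, and thresholding with hysteresis is omitted:

```python
import numpy as np
from scipy.ndimage import gaussian_filter  # assumes SciPy is available

def edge_map(image, sigma=2.0, thresh=1e-3):
    """Mark maxima of gradient magnitude of a Gaussian-smoothed image."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)              # derivatives along rows, cols
    mag = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)

    # Quantize gradient direction to 0/45/90/135 degrees and keep a pixel
    # only if it is at least as large as both neighbours along that line.
    offsets = {0: (0, 1), 1: (-1, 1), 2: (-1, 0), 3: (-1, -1)}
    q = (np.round(angle / (np.pi / 4)) % 4).astype(int)
    edges = np.zeros_like(mag, dtype=bool)
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            di, dj = offsets[q[i, j]]
            if (mag[i, j] > thresh
                    and mag[i, j] >= mag[i + di, j + dj]
                    and mag[i, j] >= mag[i - di, j - dj]):
                edges[i, j] = True
    return edges
```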


Journal ArticleDOI
01 Mar 1986
TL;DR: In this paper, a new architecture for controlling mobile robots is described; its layers are made up of asynchronous modules that communicate over low-bandwidth channels, and each module is an instance of a fairly simple computational machine.
Abstract: A new architecture for controlling mobile robots is described. Layers of a control system are built to let the robot operate at increasing levels of competence. Layers are made up of asynchronous modules that communicate over low-bandwidth channels. Each module is an instance of a fairly simple computational machine. Higher-level layers can subsume the roles of lower levels by suppressing their outputs. However, lower levels continue to function as higher levels are added. The result is a robust and flexible robot control system. The system has been used to control a mobile robot wandering around unconstrained laboratory areas and computer machine rooms. Eventually it is intended to control a robot that wanders the office areas of our laboratory, building maps of its surroundings and using an onboard arm to perform simple tasks.

7,291 citations
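
The layered-control idea can be caricatured in a few lines. Note that in the paper suppression happens on the wires between modules; the priority arbitration and all class names below are invented simplifications:

```python
class Layer:
    """One level of competence; returns a command or None to defer."""
    def command(self, sensors):
        raise NotImplementedError

class Wander(Layer):                       # lower layer: wander around
    def command(self, sensors):
        return "forward"

class Avoid(Layer):                        # placed above Wander for this demo
    def command(self, sensors):
        return "turn_left" if sensors.get("obstacle_ahead") else None

def control(layers, sensors):
    # Scan from the top layer down: a layer that emits a command
    # suppresses (subsumes) the outputs of every layer beneath it.
    for layer in reversed(layers):
        cmd = layer.command(sensors)
        if cmd is not None:
            return cmd
    return "halt"

print(control([Wander(), Avoid()], {"obstacle_ahead": True}))   # turn_left
print(control([Wander(), Avoid()], {"obstacle_ahead": False}))  # forward
```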


Posted Content
TL;DR: In this article, a simple method of calculating a heteroskedasticity and autocorrelation consistent covariance matrix that is positive semi-definite by construction is described.
Abstract: This paper describes a simple method of calculating a heteroskedasticity and autocorrelation consistent covariance matrix that is positive semi-definite by construction. It also establishes consistency of the estimated covariance matrix under fairly general conditions.

5,822 citations
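
A sketch of the estimator as it is usually presented, with Bartlett weights that decline linearly in the lag (the weighting that keeps the result positive semi-definite by construction); the function name and the OLS usage note are illustrative:

```python
import numpy as np

def newey_west(u, lags):
    """HAC long-run covariance of mean-zero moment conditions u (T x k)."""
    T = u.shape[0]
    S = u.T @ u / T                        # lag-0 (heteroskedasticity) term
    for j in range(1, lags + 1):
        w = 1.0 - j / (lags + 1.0)         # Bartlett weight, linear decay
        G = u[j:].T @ u[:-j] / T           # lag-j autocovariance
        S += w * (G + G.T)                 # symmetrized lag-j contribution
    return S

# For OLS with regressors X and residuals e, take u_t = x_t * e_t; the
# HAC covariance of beta-hat is then inv(X'X) @ (T * S) @ inv(X'X).
```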


Book
01 Mar 1986
TL;DR: Robot Vision as discussed by the authors is a broad overview of the field of computer vision, using a consistent notation based on a detailed understanding of the image formation process, which can provide a useful and current reference for professionals working in the fields of machine vision, image processing, and pattern recognition.
Abstract: From the Publisher: This book presents a coherent approach to the fast-moving field of computer vision, using a consistent notation based on a detailed understanding of the image formation process. It covers even the most recent research and will provide a useful and current reference for professionals working in the fields of machine vision, image processing, and pattern recognition. An outgrowth of the author's course at MIT, Robot Vision presents a solid framework for understanding existing work and planning future research. Its coverage includes a great deal of material that is important to engineers applying machine vision methods in the real world. The chapters on binary image processing, for example, help explain and suggest how to improve the many commercial devices now available. And the material on photometric stereo and the extended Gaussian image points the way to what may be the next thrust in commercialization of the results in this area. Chapters in the first part of the book emphasize the development of simple symbolic descriptions from images, while the remaining chapters deal with methods that exploit these descriptions. The final chapter offers a detailed description of how to integrate a vision system into an overall robotics system, in this case one designed to pick parts out of a bin. The many exercises complement and extend the material in the text, and an extensive bibliography will serve as a useful guide to current research.

3,783 citations


Journal ArticleDOI
16 Oct 1986-Nature
TL;DR: The isolation of a complementary DNA segment that detects a chromosomal segment having the properties of the gene at this locus is described; the gene is expressed in many tumour types, but no RNA transcript has been found in retinoblastomas and osteosarcomas.
Abstract: The genomes of various tumour cells contain mutant oncogenes that act dominantly, in that their effects can be observed when they are introduced into non-malignant cells. There is evidence for another class of oncogenes, in which tumour-predisposing mutations are recessive to wild-type alleles. Retinoblastoma is a prototype biological model for the study of such recessive oncogenes. This malignant tumour, which arises in the eyes of children, can be explained as the result of two distinct genetic changes, each causing loss of function of one of the two homologous copies at a single genetic locus, Rb, assigned to the q14 band of human chromosome 13. Mutations affecting this locus may be inherited from a parent, may arise during gametogenesis or may occur somatically. Those who inherit a mutant allele at this locus have a high incidence of non-ocular, second tumours, almost half of which are osteosarcomas believed to be caused by the same mutation. Here we describe the isolation of a complementary DNA segment that detects a chromosomal segment having the properties of the gene at this locus. The gene is expressed in many tumour types, but no RNA transcript has been found in retinoblastomas and osteosarcomas. The cDNA fragment detects a locus spanning at least 70 kilobases (kb) in human chromosome band 13q14, all or part of which is frequently deleted in retinoblastomas and osteosarcomas.

2,827 citations


Journal ArticleDOI
TL;DR: In this paper, the temperature dependence of the screening radius, as obtained from lattice QCD, is compared with the J/ψ radius calculated in charmonium models, and the feasibility of detecting this effect clearly in the dilepton mass spectrum is examined.

2,416 citations


Journal ArticleDOI
29 Aug 1986-Cell
TL;DR: In this paper, an electrophoretic mobility shift assay with end-labeled DNA fragments was used to characterize proteins that bind to the immunoglobulin (Ig) heavy chain and the kappa light chain enhancers.

2,413 citations


Book
01 Jan 1986
TL;DR: In this article, the authors define an abstract actor machine and provide a minimal programming language for it; a more expressive language, which includes higher-level constructs such as delayed and eager evaluation, can be defined in terms of these primitives.
Abstract: A foundational model of concurrency is developed in this thesis. It examines issues in the design of parallel systems and shows why the actor model is suitable for exploiting large-scale parallelism. Concurrency in actors is constrained only by the availability of hardware resources and by the logical dependence inherent in the computation. Unlike dataflow and functional programming, however, actors are dynamically reconfigurable and can model shared resources with changing local state. Concurrency is spawned in actors using asynchronous message-passing, pipelining, and the dynamic creation of actors. The author defines an abstract actor machine and provides a minimal programming language for it. A more expressive language, which includes higher-level constructs such as delayed and eager evaluation, can be defined in terms of the primitives. Examples are given to illustrate the ease with which concurrent data and control structures can be programmed. This thesis deals with some central issues in distributed computing. Specifically, problems of divergence and deadlock are addressed. Additional keywords: object-oriented programming; semantics.

2,207 citations
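
The asynchronous message-passing style can be sketched with threads and queues; this illustrates mailboxes, asynchronous sends, and changing local state only, not Agha's formal operational semantics:

```python
import queue
import threading
import time

class Actor:
    """An actor = a mailbox plus a behaviour run on its own thread."""
    def __init__(self, behaviour):
        self.mailbox = queue.Queue()
        self.behaviour = behaviour
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):                   # asynchronous: sender never waits
        self.mailbox.put(msg)

    def _run(self):
        while True:                        # process messages one at a time
            self.behaviour(self, self.mailbox.get())

def counter(actor, msg):                   # local state changes per message
    actor.count = getattr(actor, "count", 0) + 1
    print("got", msg, "-> count =", actor.count)

c = Actor(counter)
c.send("tick")
c.send("tick")
time.sleep(0.2)                            # let the daemon thread drain
```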


Journal ArticleDOI
TL;DR: In this paper, a constructive theory of randomness for functions, based on computational complexity, is developed, and a pseudorandom function generator is presented, which is a deterministic polynomial-time algorithm that transforms pairs (g, r), where g is any one-way function and r is a random k-bit string, to computable functions.
Abstract: A constructive theory of randomness for functions, based on computational complexity, is developed, and a pseudorandom function generator is presented. This generator is a deterministic polynomial-time algorithm that transforms pairs (g, r), where g is any one-way function and r is a random k-bit string, to polynomial-time computable functions f_r: {1, …, 2^k} → {1, …, 2^k}. These f_r's cannot be distinguished from random functions by any probabilistic polynomial-time algorithm that asks and receives the value of a function at arguments of its choice. The result has applications in cryptography, random constructions, and complexity theory.

2,043 citations
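
The construction can be sketched as a binary tree walk driven by the input bits, with a length-doubling generator expanding each seed; SHA-256 stands in for the generator purely for illustration and is not a proven pseudorandom generator:

```python
import hashlib

def G(seed: bytes):
    """Length-doubling generator: 32 bytes in, two 32-byte halves out."""
    return (hashlib.sha256(b"L" + seed).digest(),
            hashlib.sha256(b"R" + seed).digest())

def f(key: bytes, x: str) -> bytes:
    """f_key(x) for a bit-string x: walk the GGM tree, one level per bit."""
    s = key
    for bit in x:
        left, right = G(s)                 # expand the current seed
        s = right if bit == "1" else left  # descend according to input bit
    return s

key = bytes(32)
print(f(key, "0110").hex())                # one of 2^4 pseudorandom values
```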


Journal ArticleDOI
10 Oct 1986-Science
TL;DR: The recognition of an amino-terminal residue in a protein may mediate both the metabolic stability of the protein and the potential for regulation of its stability as predicted by the N-end rule.
Abstract: When a chimeric gene encoding a ubiquitin-beta-galactosidase fusion protein is expressed in the yeast Saccharomyces cerevisiae, ubiquitin is cleaved off the nascent fusion protein, yielding a deubiquitinated beta-galactosidase (beta gal). With one exception, this cleavage takes place regardless of the nature of the amino acid residue of beta gal at the ubiquitin-beta gal junction, thereby making it possible to expose different residues at the amino-termini of the otherwise identical beta gal proteins. The beta gal proteins thus designed have strikingly different half-lives in vivo, from more than 20 hours to less than 3 minutes, depending on the nature of the amino acid at the amino-terminus of beta gal. The set of individual amino acids can thus be ordered with respect to the half-lives that they confer on beta gal when present at its amino-terminus (the "N-end rule"). The currently known amino-terminal residues in long-lived, noncompartmentalized intracellular proteins from both prokaryotes and eukaryotes belong exclusively to the stabilizing class as predicted by the N-end rule. The function of the previously described posttranslational addition of single amino acids to protein amino-termini may also be accounted for by the N-end rule. Thus the recognition of an amino-terminal residue in a protein may mediate both the metabolic stability of the protein and the potential for regulation of its stability.

1,902 citations


Journal ArticleDOI
26 Dec 1986-Cell
TL;DR: Phorbol-ester-mediated induction of NF-kappa B was observed in a T cell line and a nonlymphoid cell line, and is therefore not restricted to B-lymphoid cells, indicating that factors that control transcription of specific genes in specific cells may be activated by posttranslational modification of precursor factors present more widely.

Journal ArticleDOI
28 Mar 1986-Cell
TL;DR: Ced-3 and ced-4 mutants appear grossly normal in morphology and behavior, indicating that programmed cell death is not an essential aspect of nematode development.

Journal ArticleDOI
TL;DR: A model for asynchronous distributed computation is presented and it is shown that natural asynchronous distributed versions of a large class of deterministic and stochastic gradient-like algorithms retain the desirable convergence properties of their centralized counterparts.
Abstract: We present a model for asynchronous distributed computation and then proceed to analyze the convergence of natural asynchronous distributed versions of a large class of deterministic and stochastic gradient-like algorithms. We show that such algorithms retain the desirable convergence properties of their centralized counterparts, provided that the time between consecutive interprocessor communications and the communication delays are not too large.
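
A toy simulation of the setting: gradient steps are taken using iterates that are up to a few updates stale, mimicking bounded interprocessor communication delays; the least-squares problem and all constants are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
grad = lambda x: A.T @ (A @ x - b)         # gradient of 0.5*||Ax - b||^2

x = np.zeros(5)
history = [x.copy()]                       # past iterates = possible staleness
step, max_delay = 0.01, 3
for _ in range(2000):
    delay = int(rng.integers(1, max_delay + 1))
    x = x - step * grad(history[-min(delay, len(history))])  # stale gradient
    history.append(x.copy())

print("gradient norm at finish:", np.linalg.norm(grad(x)))   # near zero
```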

Journal ArticleDOI
TL;DR: It is shown how a variety of errors-in-variables models may be identifiable and estimable in panel data without the use of external instruments; the approach is applied to the estimation of 'labor demand' relationships, also known as the 'short-run increasing returns to scale' puzzle.

Journal ArticleDOI
TL;DR: In this article, the authors explore the relative importance of ambient conditional instability and air-sea latent and sensible heat transfer in both the development and maintenance of tropical cyclones using highly idealized models.
Abstract: Observations and numerical simulations of tropical cyclones show that evaporation from the sea surface is essential to the development of reasonably intense storms. On the other hand, the CISK hypothesis, in the form originally advanced by Charney and Eliassen, holds that initial development results from the organized release of preexisting conditional instability. In this series of papers, we explore the relative importance of ambient conditional instability and air-sea latent and sensible heat transfer in both the development and maintenance of tropical cyclones using highly idealized models. In particular, we advance the hypothesis that the intensification and maintenance of tropical cyclones depend exclusively on self-induced heat transfer from the ocean. In this sense, these storms may be regarded as resulting from a finite amplitude air-sea interaction instability rather than from a linear instability involving ambient potential buoyancy. In the present paper, we attempt to show that reasona...

Journal ArticleDOI
TL;DR: A sinusoidal model for the speech waveform is used to develop a new analysis/synthesis technique that is characterized by the amplitudes, frequencies, and phases of the component sine waves, which forms the basis for new approaches to the problems of speech transformations including time-scale and pitch-scale modification, and midrate speech coding.
Abstract: A sinusoidal model for the speech waveform is used to develop a new analysis/synthesis technique that is characterized by the amplitudes, frequencies, and phases of the component sine waves. These parameters are estimated from the short-time Fourier transform using a simple peak-picking algorithm. Rapid changes in the highly resolved spectral components are tracked using the concept of "birth" and "death" of the underlying sine waves. For a given frequency track a cubic function is used to unwrap and interpolate the phase such that the phase track is maximally smooth. This phase function is applied to a sine-wave generator, which is amplitude modulated and added to the other sine waves to give the final speech output. The resulting synthetic waveform preserves the general waveform shape and is essentially perceptually indistinguishable from the original speech. Furthermore, in the presence of noise the perceptual characteristics of the speech as well as the noise are maintained. In addition, it was found that the representation was sufficiently general that high-quality reproduction was obtained for a larger class of inputs including: two overlapping, superposed speech waveforms; music waveforms; speech in musical backgrounds; and certain marine biologic sounds. Finally, the analysis/synthesis system forms the basis for new approaches to the problems of speech transformations including time-scale and pitch-scale modification, and midrate speech coding [8], [9].
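
One frame of the analysis/synthesis loop can be sketched as peak picking on the short-time spectrum followed by summing sinusoids; the frame-to-frame "birth/death" tracking and cubic phase interpolation described above are omitted, and the test signal is invented:

```python
import numpy as np

fs, N = 8000, 512
t = np.arange(N) / fs
frame = 0.6*np.sin(2*np.pi*440*t) + 0.3*np.sin(2*np.pi*880*t + 1.0)

win = np.hanning(N)
X = np.fft.rfft(frame * win)               # short-time spectrum of one frame
mag = np.abs(X)

# Simple peak picking: local maxima above a magnitude floor.
peaks = [k for k in range(1, len(mag) - 1)
         if mag[k] > mag[k-1] and mag[k] > mag[k+1] and mag[k] > 0.05*mag.max()]

synth = np.zeros(N)
for k in peaks:
    amp = 2 * mag[k] / win.sum()           # rough amplitude estimate
    freq = k * fs / N                      # bin centre frequency
    phase = np.angle(X[k])
    synth += amp * np.cos(2*np.pi*freq*t + phase)

print("peak bins:", peaks)                 # sine-wave parameters per frame
```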

Journal ArticleDOI
TL;DR: For instance, the authors showed that for dog experts sufficiently knowledgeable to identify dogs of the same breed, memory for photographs of dogs of that breed is as disrupted by inversion as is face recognition.
Abstract: Recognition memory for faces is hampered much more by inverted presentation than is memory for any other material so far examined. The present study demonstrates that faces are not unique with regard to this vulnerability to inversion. The experiments also attempt to isolate the source of the inversion effect. In one experiment, use of stimuli (landscapes) in which spatial relations among elements are potentially important distinguishing features is shown not to guarantee a large inversion effect. Two additional experiments show that for dog experts sufficiently knowledgeable to individuate dogs of the same breed, memory for photographs of dogs of that breed is as disrupted by inversion as is face recognition. A final experiment indicates that the effect of orientation on memory for faces does not depend on inability to identify single features of these stimuli upside down. These experiments are consistent with the view that experts represent items in memory in terms of distinguishing features of a different kind than do novices. Speculations as to the type of feature used and neuropsychological and developmental implications of this accomplishment are offered. Perception of human faces is strongly influenced by their orientation. Although inverted photographs of faces remain identifiable, they lose expressive characteristics and become difficult or impossible to categorize in terms of age, mood, and attractiveness. Failure to recognize familiar individuals in photographs viewed upside down is a well-known phenomenon (see, e.g., Arnheim, 1954; Attneave, 1967; Brooks & Goldstein, 1963; Kohler, 1940; Rock, 1974; Yarmey, 1971). Rock argued that because the important distinguishing features of faces are represented in memory with respect to the normal upright, an inverted face must be mentally righted before it can be recognized. He showed that it is difficult to reorient stimuli that have multiple parts, and especially difficult to recognize inverted stimuli in which distinguishing features involve relations among adjacent contours. Faces appear rich in just this sort of distinguishing feature; on these grounds they might be expected to be especially vulnerable to inversion. Thompson's (1980) "Thatcher illusion" provides a striking demonstration that spatial relations among features crucial in the perception of upright faces are not apparent when faces are upside down. In standard recognition memory paradigms, faces presented for inspection upside down and later presented for recognition (still upside down) are much more poorly discriminated from distractors than if the photographs are inspected and then rec

Journal ArticleDOI
TL;DR: A likelihood ratio decision rule is derived and its performance evaluated in both the noise-only and signal-plus-noise cases.
Abstract: A general problem of signal detection in a background of unknown Gaussian noise is addressed, using the techniques of statistical hypothesis testing. Signal presence is sought in one data vector, and another independent set of signal-free data vectors is available which share the unknown covariance matrix of the noise in the former vector. A likelihood ratio decision rule is derived and its performance evaluated in both the noise-only and signal-plus-noise cases.
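
A hedged sketch of a decision statistic of this kind (often associated with this paper under the name "Kelly detector"): the test vector is compared against a known signal direction, with the unknown noise covariance replaced by an estimate from the signal-free training set. Sizes, data, and threshold handling are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 4, 32
s = np.ones(n) / np.sqrt(n)                # known signal steering vector
Z = rng.standard_normal((K, n))            # K signal-free training vectors
x = 3*s + rng.standard_normal(n)           # test vector (signal present)

M = Z.T @ Z                                # unnormalized sample covariance
Mi = np.linalg.inv(M)
num = np.abs(s @ Mi @ x) ** 2              # adaptive matched-filter energy
den = (s @ Mi @ s) * (1 + x @ Mi @ x)      # normalization term
stat = num / den
print("GLRT statistic:", stat)             # compare against a threshold
```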

Proceedings ArticleDOI
01 Nov 1986
TL;DR: By incorporating the dynamic tree data structure of Sleator and Tarjan, a version of the algorithm running in O(nm log(n²/m)) time on an n-vertex, m-edge graph is obtained, which is as fast as any known method for any graph density and faster on graphs of moderate density.
Abstract: All previously known efficient maximum-flow algorithms work by finding augmenting paths, either one path at a time (as in the original Ford and Fulkerson algorithm) or all shortest-length augmenting paths at once (using the layered network approach of Dinic). An alternative method based on the preflow concept of Karzanov is introduced. A preflow is like a flow, except that the total amount flowing into a vertex is allowed to exceed the total amount flowing out. The method maintains a preflow in the original network and pushes local flow excess toward the sink along what are estimated to be shortest paths. The algorithm and its analysis are simple and intuitive, yet the algorithm runs as fast as any other known method on dense graphs, achieving an O(n³) time bound on an n-vertex graph. By incorporating the dynamic tree data structure of Sleator and Tarjan, we obtain a version of the algorithm running in O(nm log(n²/m)) time on an n-vertex, m-edge graph. This is as fast as any known method for any graph density and faster on graphs of moderate density. The algorithm also admits efficient distributed and parallel implementations. A parallel implementation running in O(n² log n) time using n processors and O(m) space is obtained. This time bound matches that of the Shiloach-Vishkin algorithm.
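
The preflow method can be sketched compactly: saturate the source edges, then repeatedly push excess along "downhill" residual edges and relabel a vertex when it has excess but no admissible edge. This plain-list version is the simple variant, without dynamic trees or the tuned bounds quoted above:

```python
def max_flow(cap, s, t):
    """cap: dense capacity matrix; returns the max s-t flow value."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    height = [0] * n
    excess = [0] * n
    height[s] = n                          # source starts at height n
    for v in range(n):                     # saturate every source edge
        flow[s][v] = cap[s][v]
        flow[v][s] = -cap[s][v]
        excess[v] = cap[s][v]
    active = [v for v in range(n) if v not in (s, t) and excess[v] > 0]
    while active:
        u = active.pop()
        while excess[u] > 0:               # discharge u completely
            pushed = False
            for v in range(n):
                r = cap[u][v] - flow[u][v]           # residual capacity
                if r > 0 and height[u] == height[v] + 1:
                    d = min(excess[u], r)            # push excess downhill
                    flow[u][v] += d; flow[v][u] -= d
                    excess[u] -= d; excess[v] += d
                    if v not in (s, t) and v not in active:
                        active.append(v)
                    pushed = True
                    if excess[u] == 0:
                        break
            if excess[u] > 0 and not pushed:
                height[u] += 1             # relabel: no admissible edge left
    return excess[t]

cap = [[0, 3, 2, 0], [0, 0, 1, 2], [0, 0, 0, 3], [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))                 # expected max flow: 5
```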

Book
01 Jan 1986
TL;DR: The text strikes a balance between simplicity in exposition and sophistication in analytical reasoning, ensuring that the mathematically oriented reader will find a smooth development without major gaps.
Abstract: The course is attended by a large number of undergraduate and graduate students with diverse backgrounds. Accordingly, we have tried to strike a balance between simplicity in exposition and sophistication in analytical reasoning. Some of the more mathematically rigorous analysis has been just sketched or intuitively explained in the text, so that complex proofs do not stand in the way of an otherwise simple exposition. At the same time, some of this analysis and the necessary mathematical results are developed (at the level of advanced calculus) in theoretical problems, which are included at the end of the corresponding chapter. The theoretical problems (marked by *) constitute an important component of the text, and ensure that the mathematically oriented reader will find here a smooth development without major gaps.

Journal ArticleDOI
TL;DR: In this article, the author argues that the analysis of hierarchical structures does not boil down to a compounding of the basic two-tier inefficiency: moving from the simple two-tier principal/agent structure to more complex ones introduces possibilities beyond the productive inefficiency associated with asymmetric information and insurance motives (or limited liability constraints).
Abstract: This research derives its motivation (and borrows unrestrainedly) from sociological studies of collusive behavior in organizations. Like the sociology literature, it emphasizes that behavior is often best predicted by the analysis of group as well as individual incentives; and it gropes toward a precise definition of concepts such as "power," "cliques," "corporate politics," and "bureaucracy" (Crozier, 1963; Cyert and March; Dalton; Scott). It differs from this literature in that it tries to incorporate the acquired knowledge of modern information economics into the analysis. The research also borrows a considerable amount from the principal/agent paradigm of information economics. This paradigm, mainly developed for two-tier organizations, emphasizes the productive inefficiency associated with asymmetric information and insurance motives (or limited liability constraints). Formally, organizations can be seen as networks of overlapping or nested principal/agent relationships. A theme of the paper, however, is that the analysis of hierarchical structures does not boil down to a compounding of the basic inefficiency, due to the fact that going from the simple two-tier principal/agent structure to more complex ones introduces the possibility of

Journal ArticleDOI
24 Jan 1986-Science
TL;DR: A model of a blood vessel was constructed in vitro and electron microscopy showed that the endothelial cells lining the lumen and the smooth muscle cells in the wall were healthy and well differentiated.
Abstract: A model of a blood vessel was constructed in vitro. Its multilayered structure resembled that of an artery and it withstood physiological pressures. Electron microscopy showed that the endothelial cells lining the lumen and the smooth muscle cells in the wall were healthy and well differentiated. The lining of endothelial cells functioned physically, as a permeability barrier, and biosynthetically, producing von Willebrand's factor and prostacyclin. The strength of the model depended on its multiple layers of collagen integrated with a Dacron mesh.

Journal ArticleDOI
01 Jan 1986-Nature
TL;DR: A neu complementary DNA clone isolated from a cell line transformed by this oncogene is described, and its sequence suggests strongly that the neu gene encodes the receptor for an as yet unidentified growth factor.
Abstract: The neu oncogene is repeatedly activated in neuro- and glioblastomas derived by transplacental mutagenesis of the BDIX strain of rat with ethylnitrosourea. Foci induced by the DNAs from such tumours on NIH 3T3 cells contain the neu oncogene and an associated phosphoprotein of relative molecular mass 185,000 (p185). Previous work has shown that the neu gene is related to, but distinct from, the gene encoding the EGF receptor (c-erb-B). Here we describe a neu complementary DNA clone isolated from a cell line transformed by this oncogene; the clone has biological activity in a focus-forming assay. The nucleotide sequence of this clone predicts a 1,260-amino-acid transmembrane protein product similar in overall structure to the EGF receptor. We found that 50% of the predicted amino acids of neu and the EGF receptor are identical; greater than 80% of the amino acids in the tyrosine kinase domain are identical. Our results suggest strongly that the neu gene encodes the receptor for an as yet unidentified growth factor.

Journal ArticleDOI
TL;DR: It is shown that a regular bipartite graph is an expander if and only if the second largest eigenvalue of its adjacency matrix is well separated from the first.
Abstract: Linear expanders have numerous applications to theoretical computer science. Here we show that a regular bipartite graph is an expander if and only if the second largest eigenvalue of its adjacency matrix is well separated from the first. This result, which has an analytic analogue for Riemannian manifolds, enables one to generate expanders randomly and to check their expanding properties efficiently. It also supplies an efficient algorithm for approximating the expanding properties of a graph. The exact determination of these properties is known to be coNP-complete.
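
The eigenvalue check the result suggests is easy to sketch: build a d-regular test graph and compare the second-largest adjacency eigenvalue with d. The random circulant below is just a convenient (non-bipartite) test graph, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 64, 6
A = np.zeros((n, n))
strides = rng.choice(np.arange(1, n // 2), size=d // 2, replace=False)
for s in strides:                          # each stride adds a 2-regular cycle
    for i in range(n):
        A[i, (i + s) % n] = 1
        A[(i + s) % n, i] = 1

eig = np.sort(np.linalg.eigvalsh(A))[::-1]
print("degree d =", d)
print("lambda_1 =", round(eig[0], 3), " lambda_2 =", round(eig[1], 3))
# A lambda_2 well separated from d certifies good expansion.
```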

Journal ArticleDOI
07 Nov 1986-Cell
TL;DR: The complete nucleotide sequence and primary structure of a full-length mdr cDNA capable of conferring a complete multidrug-resistant phenotype is presented; strong homology suggests that a highly conserved functional unit involved in membrane transport is present in the mdr polypeptide.

Journal ArticleDOI
06 Jun 1986-Cell
TL;DR: The construction of in vitro recombinants between the normal and transforming cDNAs has allowed the determination of the mutation responsible for the activation of the neu proto-oncogene.

Journal ArticleDOI
TL;DR: An overview of the X Window System is presented, focusing on the system substrate and the low-level facilities provided to build applications and to manage the desktop.
Abstract: An overview of the X Window System is presented, focusing on the system substrate and the low-level facilities provided to build applications and to manage the desktop. The system provides high-performance, high-level, device-independent graphics. A hierarchy of resizable, overlapping windows allows a wide variety of application and user interfaces to be built easily. Network-transparent access to the display provides an important degree of functional separation, without significantly affecting performance, which is crucial to building applications for a distributed environment. To a reasonable extent, desktop management can be custom-tailored to individual environments, without modifying the base system and typically without affecting applications.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the edge detection problem as a numerical differentiation problem and show that numerical differentiation of images is an ill-posed problem in the sense of Hadamard.
Abstract: Edge detection is the process that attempts to characterize the intensity changes in the image in terms of the physical processes that have originated them. A critical, intermediate goal of edge detection is the detection and characterization of significant intensity changes. This paper discusses this part of the edge detection problem. To characterize the types of intensity changes, derivatives of different types, and possibly different scales, are needed. Thus, we consider this part of edge detection as a problem in numerical differentiation. We show that numerical differentiation of images is an ill-posed problem in the sense of Hadamard. The image therefore needs to be filtered by a regularizing operation before differentiation. This shows that this part of edge detection consists of two steps, a filtering step and a differentiation step. Following this perspective, the paper discusses in detail the following theoretical aspects of edge detection. 1) The properties of different types of filters (with minimal uncertainty, with a bandpass spectrum, and with limited support) are derived. Minimal-uncertainty filters optimize a tradeoff between computational efficiency and regularizing properties. 2) Relationships among several 2-D differential operators are established. In particular, we characterize the relation between the Laplacian and the second directional derivative along the gradient. Zero crossings of the Laplacian are not the only features computed in early vision. 3) Geometrical and topological properties of the zero crossings of differential operators are studied in terms of transversality and Morse theory.
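
The two-step filter-then-differentiate prescription is easy to see in one dimension: differentiating a noisy step directly is noise-dominated, while differentiating a Gaussian-regularized version recovers a clean peak at the edge. The signal and noise level below are invented for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d  # assumes SciPy is available

x = np.linspace(0, 1, 500)
signal = (x > 0.5).astype(float)             # an ideal step edge at x = 0.5
noisy = signal + 0.05 * np.random.default_rng(3).standard_normal(x.size)

raw_deriv = np.gradient(noisy, x)                        # noise-dominated
reg_deriv = np.gradient(gaussian_filter1d(noisy, 5), x)  # regularized first

print("raw derivative max |f'|:", round(np.abs(raw_deriv).max(), 1))
print("regularized argmax at x =", x[np.abs(reg_deriv).argmax()])  # ~0.5
```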

Journal ArticleDOI
01 Jan 1986-Nature
TL;DR: The identification of a human B-cell nuclear factor (IgNF-A) is reported here that binds to DNA sequences in the upstream regions of both the mouse heavy- and κ light-chain gene promoters and also to the mouse heavy-chain gene enhancer.
Abstract: Trans-acting factors that mediate B-cell specific transcription of immunoglobulin genes have been postulated based on an analysis of the expression of exogenously introduced immunoglobulin gene recombinants in lymphoid and non-lymphoid cells. Two B-cell-specific, cis-acting transcriptional regulatory elements have been identified. One element is located in the intron between the variable (V) and constant (C) regions of both heavy and κ light-chain genes and acts as a transcriptional enhancer [1–6]. The second element is found upstream of both heavy and κ light-chain gene promoters. This element directs lymphoid-specific transcription even in the presence of viral enhancers [7–10]. We have sought nuclear factors that might bind specifically to these two regulatory elements by application of a modified gel electrophoresis DNA binding assay [11–13]. We report here the identification of a human B-cell nuclear factor (IgNF-A) that binds to DNA sequences in the upstream regions of both the mouse heavy and κ light-chain gene promoters and also to the mouse heavy-chain gene enhancer. This sequence-specific binding is probably mediated by a highly conserved sequence motif, ATTTGCAT, present in all three transcriptional elements. Interestingly, a factor showing similar binding specificity to IgNF-A is also present in human HeLa cells.

Proceedings ArticleDOI
01 Nov 1986
TL;DR: The method of proof is extended to investigate the complexity of the word problem for a fixed permutation group, and it is shown that polynomial-size circuits of width 4 also recognize exactly nonuniform NC¹.
Abstract: We show that any language recognized by an NC¹ circuit (fan-in 2, depth O(log n)) can be recognized by a width-5 polynomial-size branching program. As any bounded-width polynomial-size branching program can be simulated by an NC¹ circuit, we have that the class of languages recognized by such programs is exactly nonuniform NC¹. Further, following Ruzzo (J. Comput. System Sci. 22 (1981), 365–383) and Cook (Inform. and Control 64 (1985), 2–22), if the branching programs are restricted to be ATIME(log n)-uniform, they recognize the same languages as do ATIME(log n)-uniform NC¹ circuits, that is, those languages in ATIME(log n). We also extend the method of proof to investigate the complexity of the word problem for a fixed permutation group and show that polynomial-size circuits of width 4 also recognize exactly nonuniform NC¹.
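
The group-theoretic core of the width-5 result can be demonstrated directly: two 5-cycles whose commutator is again a 5-cycle let a constant-width program compute AND, and recursing over a formula yields the full construction (omitted here). The sketch below searches for such a pair rather than hard-coding one:

```python
from itertools import permutations

def compose(p, q):                         # (p∘q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(5))

def inverse(p):
    inv = [0] * 5
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def is_5cycle(p):                          # orbit of 0 covers all 5 points
    i, seen = 0, set()
    for _ in range(5):
        seen.add(i)
        i = p[i]
    return len(seen) == 5

cycles = [p for p in permutations(range(5)) if is_5cycle(p)]
sigma, tau = next(
    (a, b) for a in cycles for b in cycles
    if is_5cycle(compose(compose(a, b), compose(inverse(a), inverse(b)))))

identity = (0, 1, 2, 3, 4)

def AND(x, y):
    """Width-5 'program' for x AND y: output is a 5-cycle iff both hold."""
    P = sigma if x else identity           # instruction pair reading x
    Q = tau if y else identity             # instruction pair reading y
    comm = compose(compose(P, Q), compose(inverse(P), inverse(Q)))
    return is_5cycle(comm)

print([AND(x, y) for x in (0, 1) for y in (0, 1)])  # [False, False, False, True]
```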