Principles of Artificial Intelligence
Citations
14,635 citations
Cites methods from "Principles of Artificial Intelligence"
...Learning hierarchical representations through deep SL, UL, RL. Many methods of Good Old-Fashioned Artificial Intelligence (GOFAI) (Nilsson, 1980) as well as more recent approaches to AI (Russell, Norvig, Canny, Malik, & Edwards, 1995) and Machine Learning (Mitchell, 1997) learn hierarchies of more and more abstract data representations....
[...]
...Unlike traditional methods for automatic sequential program synthesis (e.g., Balzer, 1985; Deville & Lau, 1994; Soloway, 1986; Waldinger & Lee, 1969), RNNs can learn programs that mix sequential and parallel information processing in a natural and efficient way, exploiting the massive parallelism viewed as crucial for sustaining the rapid decline of computation cost observed over the past 75 years....
[...]
13,487 citations
7,930 citations
Cites background from "Principles of Artificial Intelligence"
...This latter strategy works in situations where the goodness of alternative actions is determined by estimates which are always overly optimistic and which become more realistic with continued experience, as occurs for example in A* search (Nilsson, 1980)....
[...]
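
The excerpt above hinges on A*'s admissibility requirement: the heuristic estimate never overestimates the true remaining cost, so the combined estimate g + h starts out optimistic and becomes more realistic as search deepens. A minimal Python sketch under that assumption (the graph interface, with neighbors(n) yielding (node, step_cost) pairs, is an illustrative choice, not from the cited text):

    import heapq
    from itertools import count

    def a_star(start, goal, neighbors, h):
        # h must be admissible: an optimistic estimate that never
        # overestimates the true remaining cost to the goal.
        tie = count()  # tie-breaker so the heap never compares nodes
        frontier = [(h(start), next(tie), 0, start, [start])]
        best_g = {start: 0}  # cheapest known cost to reach each node
        while frontier:
            f, _, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, g
            for nxt, step_cost in neighbors(node):
                g2 = g + step_cost
                if g2 < best_g.get(nxt, float("inf")):  # cheaper route found
                    best_g[nxt] = g2
                    heapq.heappush(frontier, (g2 + h(nxt), next(tie), g2, nxt, path + [nxt]))
        return None, float("inf")

With h ≡ 0 this reduces to uniform-cost (Dijkstra) search; a stronger admissible h prunes more of the frontier.
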
7,877 citations
[...]
6,373 citations
References
7,642 citations
"Principles of Artificial Intelligen..." refers methods in this paper
...The elegant way of modeling a computer by a Turing machine leads us to computational complexity theory, which asks which problems can be solved in a finite amount of time on a computer. Time is the most important computational resource, besides space and energy. Space and energy are negligible in the Turing machine model because the machine has an infinitely long tape and requires no energy resources [Lewis and Papadimitriou (1981)]....
[...]
...Turing created a simple model called the Turing machine [Turing (1936)] (see Figure 2)....
[...]
...The formal definition of the easy problems represented by P is as follows: the set of all decision problems whose instances are solvable in polynomial time by a deterministic Turing machine. In a deterministic Turing machine, every transition is determined by fixed rules [Lewis and Papadimitriou (1981)]....
[...]
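
To make the "fixed rules" of a deterministic Turing machine concrete, here is a minimal Python sketch of a transition-table simulator; the state names, the rule encoding, and the example language (binary strings ending in 0, a problem trivially in P) are illustrative assumptions, not taken from the excerpted paper:

    def run_dtm(tape, rules, state="q0", accept="qA", reject="qR", blank="_", max_steps=10_000):
        # Deterministic: each (state, symbol) pair maps to exactly one
        # (new_state, write_symbol, head_move) triple.
        cells = dict(enumerate(tape))
        head = 0
        for _ in range(max_steps):
            if state in (accept, reject):
                return state == accept
            symbol = cells.get(head, blank)
            state, write, move = rules[(state, symbol)]
            cells[head] = write
            head += move
        raise RuntimeError("step budget exceeded")

    # Example rules: accept binary strings ending in 0.
    rules = {
        ("q0", "0"): ("q0", "0", +1),  # last symbol seen was 0
        ("q0", "1"): ("q1", "1", +1),
        ("q1", "0"): ("q0", "0", +1),
        ("q1", "1"): ("q1", "1", +1),  # last symbol seen was 1
        ("q0", "_"): ("qA", "_", +1),  # end of input after a 0: accept
        ("q1", "_"): ("qR", "_", +1),  # end of input after a 1: reject
    }
    print(run_dtm("10110", rules))  # True
    print(run_dtm("101", rules))    # False

The machine runs in a single left-to-right pass, i.e., linear time, so the decision problem it solves is in P.
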
7,251 citations
"Principles of Artificial Intelligen..." refers methods in this paper
...This function corresponds to the simplified normalized contrast model of Tversky [Tversky (1977)]: $\mathrm{Sim}(C_a, B) = \alpha\,|C_a \cap B| - \beta\,|C_a - B|$ (6)....
[...]
...The result is related to a categorical representation based on the contrast model of Tversky [Tversky (1977)]....
[...]
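
Equation (6) is simple to evaluate over feature sets: shared features raise similarity and distinctive features of the first argument lower it. A small Python sketch (the function name and the default weights alpha = beta = 1 are illustrative assumptions):

    def tversky_sim(a, b, alpha=1.0, beta=1.0):
        # Simplified contrast model: reward common features,
        # penalize features of `a` that `b` lacks.
        a, b = set(a), set(b)
        return alpha * len(a & b) - beta * len(a - b)

    print(tversky_sim({"wings", "beak", "flies"}, {"wings", "beak"}))  # 1.0

Note the asymmetry: features of b missing from a are not penalized, matching the two-term form of Equation (6) rather than Tversky's full three-term contrast model.
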
2,091 citations
1,623 citations
1,500 citations