
What is the historical overview of neural networks?


Best insight from top research papers

Neural networks have undergone a long evolution since their inception in the 1950s. The first conceptual model of an artificial neural network (ANN) was developed by Warren S. McCulloch and Walter Pitts in 1953, while the key learning method, backpropagation, was discovered independently by three groups of researchers in the 1980s. Neural networks are computational approaches inspired by the structure of the brain and consist of interconnected processing elements. They can adapt to changing input and learn from available training patterns or examples. Neural networks have been applied to a wide range of real-world problems, such as speech recognition, computer vision, and industrial inspection. They continue to achieve impressive results in machine learning, although reasoning about abstract concepts remains a challenge.
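To make the learning method mentioned above concrete, the following is a minimal sketch of backpropagation: a tiny two-layer sigmoid network whose weights are adjusted by propagating the output error backwards through the layers. The task (XOR), the network size, the learning rate, and the iteration count are illustrative assumptions, not details taken from the cited papers.

```python
# Minimal backpropagation sketch: a 2-8-1 sigmoid network learning XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(scale=0.5, size=(2, 8))  # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))  # hidden -> output weights
b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network prediction

    # backward pass: propagate the output error back through the layers
    delta_out = (out - y) * out * (1 - out)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)

    # gradient-descent weight updates
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid
    b1 -= lr * delta_hid.sum(axis=0)

print("final mean squared error:", float(np.mean((out - y) ** 2)))
print(np.round(out, 2))  # predictions move toward [0, 1, 1, 0]
```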

Answers from top 4 papers

The paper provides a historical overview of neural networks, stating that they have been developed for more than 60 years since the model was proposed in the 1950s.
The paper provides a brief historical overview of neural networks, highlighting their emergence as an area of research and application in the past few years.
The paper provides a historical overview of artificial neural networks, starting from their inception and the discovery of the backpropagation algorithm, to the recent advancements in convolutional neural networks and deep learning.
The paper provides a historical overview of neural networks, mentioning that the first conceptual model of an artificial neural network was developed in 1953 by Warren S. McCulloch and Walter Pitts.

Related Questions

What are neural networks?
5 answers
Neural networks are computational systems inspired by nerve tissue, with applications in fields such as AI, machine learning, and neuroscience. They model systems of neurons, either organic or artificial, and are studied using graph theory to analyze their structural properties and topological characteristics. Physics-informed neural networks approximate the state of dynamical systems under random environments, solving differential equations to characterize system states and behaviors. These models have gained renewed interest as tools for understanding real neural systems, bridging cognitive hypotheses with neurophysiological measurements in cognitive neuroscience. Neural networks are also used to solve mathematical programming problems, offering solutions to complex optimization tasks and opening new research avenues in this area. Their success in approximating functions from data observations is attributed to their unique approximation properties and computational efficiency, despite challenges in stability and parameter optimization.
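As a small illustration of the graph-theoretic analysis mentioned above, the sketch below treats a network's connectivity as a graph and computes a few common topological measures with networkx. The random graph is a stand-in assumption; an actual study would use a real connectome or weight matrix.

```python
# Illustrative only: topological analysis of a stand-in connectivity graph.
import networkx as nx

G = nx.erdos_renyi_graph(n=100, p=0.08, seed=42)  # placeholder for real connectivity

mean_degree = sum(d for _, d in G.degree()) / G.number_of_nodes()
print("mean degree:       ", mean_degree)
print("clustering coeff.: ", nx.average_clustering(G))
if nx.is_connected(G):
    print("avg shortest path: ", nx.average_shortest_path_length(G))
```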
History of Artificial Intelligence?
5 answers
Artificial Intelligence (AI) has been a topic of interest since ancient times, with philosophers such as Aristotle, Aquinas, Descartes, and Leibniz pondering questions about cognitive operations and the requirements for a precise and unambiguous language. The history of modern AI spans from 1940 to 2021, encompassing milestones in hardware, software, programming languages, concepts, and technologies. The emergence of AI was driven by the United Kingdom and the United States, with Turing's publication and Asimov's science fiction playing significant roles. The formalization of reasoning and learning mechanisms, as well as the design of autonomous machines, have been central to the development of AI. Early milestones in AI include work in problem-solving, learning, knowledge representation, and inference, as well as the development of language understanding and knowledge-based systems. The field of AI continues to face current issues and is influenced by various organizations.
What are the key milestones in the history of neural networks?
5 answers
Key milestones in the history of neural networks include the first relevant learning method, backpropagation, described by Paul Werbos and later discovered independently by three groups of researchers: LeCun (1986), Parker (1985), and Rumelhart, Hinton, and Williams (1986). The history of neural networks is often divided into four stages, namely the beginning of neural networks, the first golden age, the quiet years, and the renewed enthusiasm, which show the interplay among biological experimentation, modeling, computer simulation, and hardware implementation. Other milestones are Rosenblatt's invention of the first true learning rule for neural networks and Minsky and Papert's identification of the XOR problem. The recent history includes the birth of deep learning and the dominance of models such as convolutional neural networks, deep belief networks, and recurrent neural networks.
What is the history of AI?
5 answers
Artificial intelligence (AI) has a history of nearly a century from its inception to the present day. The development of AI in different fields, including computer vision, natural language processing, and machine learning, follows a pattern from rules to statistics to data-driven methods. The concept of AI was inspired by science fiction and seminal works like Turing's "Computing Machinery and Intelligence". The history of AI is closely tied to the evolution of technology, from early computers to the creation of the Internet. Over time, the field of AI has been driven by the formalization of reasoning and learning mechanisms and the design of machines with autonomous capabilities. The achievements in AI have not only impacted the scientific community but have also influenced various forms of human creativity, including literature and film.
What are neural networks? Explain in depth, with references.
5 answers
Neural networks are electronic models inspired by the structure and functioning of the brain. They consist of simple units called neurons that communicate with each other through activation signals. Neural networks can learn from experience and solve problems that are beyond the scope of traditional computers, extracting patterns and detecting complex trends from imprecise or complicated data. They have been widely used in applications such as forecasting and business classification. They learn by updating their architecture and connection weights based on training patterns or input-output relations. Artificial neural networks can be applied to real-world problems that are too complex for conventional technologies or that lack algorithmic solutions.
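The weight-update idea described above can be sketched in its simplest form: a single perceptron adjusting its weights from input-output examples. The task (logical AND), the learning rate, and the epoch count are illustrative assumptions.

```python
# Minimal sketch of learning from input-output examples: a perceptron for AND.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])          # AND targets

w = np.zeros(2)                     # connection weights
b = 0.0                             # bias
lr = 0.1

for epoch in range(10):
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0      # threshold unit
        # perceptron rule: nudge weights in proportion to the prediction error
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

print(w, b)  # learned weights separate AND-true from AND-false inputs
```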
What is the history of AI?
3 answers
The history of AI can be traced back to the origins of artificial intelligence computing in 1956. Initially, AI focused on the idea of intelligence as a computer program, using algorithms to process symbols. In the 1980s, however, AI expanded to include the study of the interaction between the body, brain, and environment, leading to a broader understanding of intelligence. This new perspective emphasized embodiment and the emergence of intelligence from interaction, going beyond traditional AI to include intelligent action in various agents and systems. The history of AI also includes milestones such as the introduction of theoretical concepts by Kurt Goedel in 1931, breakthroughs in artificial neural networks and deep learning, and growing academic interest in AI rights and moral consideration of artificial entities. Overall, the history of AI has evolved from a focus on symbolic processing to a broader study of intelligence in various contexts.

See what other people are reading

What is the impact of connected vehicles on AI?
5 answers
Connected vehicles have a significant impact on Artificial Intelligence (AI) applications in various domains. By integrating Connected Vehicles (CVs) into AI-based systems, such as Adaptive Traffic Signal Control (ATSC) and Vehicle-to-Everything (V2X) networks, AI can optimize network decisions, enhance service quality, and improve traffic safety. The combination of AI and CVs enables real-time data collection, increased driver awareness, collision anticipation, and overall driving comfort and security. Moreover, AI-driven approaches in vehicular communication networks leverage AI and Machine Learning (ML) to automate tasks related to sensing, computing, and communication, leading to improved resource utilization and service enhancement. Ultimately, the integration of CVs with AI technologies paves the way for more efficient, reliable, and secure intelligent transportation systems and smart cities.
What are the most promising applications of dual-tracer PET and SPECT imaging?
4 answers
Dual-tracer PET and SPECT imaging offer promising applications in oncology and surgical planning. In oncology, dual-tracer PET-CT can aid in selecting treatments for prostate carcinoma, staging, prognostication, and predicting progression/response. Simultaneous dual-tracer PET imaging allows for observing two molecular targets in a single scan, enhancing disease diagnosis and tracking. In surgical planning for primary hyperparathyroidism, dual-tracer 99mTc-sestamibi/123I imaging significantly improves multiple gland disease detection, guiding minimally invasive parathyroidectomy and reducing surgical risks. Moreover, targeted dual-modal PET/SPECT-NIR agents show potential for preoperative patient selection, surgical planning, and intraoperative fluorescence-guided surgery, enhancing surgical workflow and patient outcomes. These applications highlight the diverse and promising roles of dual-tracer PET and SPECT imaging in precision medicine and oncological management.
How does the literature review link with the theoretical framework?
4 answers
The literature review serves as the foundation for establishing the theoretical framework. It involves a systematic analysis of existing research to identify gaps, trends, and key findings, which then inform the development of the theoretical framework. By reviewing relevant articles, researchers can extract constructs, theories, and dimensions that are crucial for understanding the phenomenon under study. This process helps in aligning the theoretical underpinnings with empirical evidence, ensuring a robust and comprehensive theoretical framework that guides the research methodology and analysis. Therefore, the literature review acts as a bridge between existing knowledge and the theoretical framework, providing a solid theoretical basis for the study.
What are some common techniques used to create user profiles for personalized recommendations in a recommendation system?
5 answers
Common techniques for creating user profiles in personalized recommendation systems include latent-factor disentanglement via variational inference, concept-based user representations built by retrieving and learning personalized concepts, and clustering of user profiles based on user behavior models. In addition, content-based filtering and collaborative filtering are widely used: content-based filtering relies on user preferences and item keywords to suggest items, while collaborative filtering recommends items based on user or item similarities. These techniques aim to enhance recommendation quality and give users more control over the recommendations they receive, ultimately improving the overall user experience in recommendation systems.
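As a hedged illustration of the collaborative-filtering idea mentioned above, the sketch below builds user profiles from a tiny ratings matrix and recommends the unrated item that similar users rated highly. The matrix, the cosine similarity measure, and the weighting scheme are illustrative assumptions, not a description of any particular system.

```python
# User-based collaborative filtering on a toy ratings matrix.
import numpy as np

# rows = users, columns = items; 0 means "not rated"
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

target_user = 0
sims = np.array([cosine(ratings[target_user], ratings[u])
                 for u in range(len(ratings))])
sims[target_user] = 0.0                      # ignore self-similarity

# predicted scores: similarity-weighted average of the other users' ratings
scores = sims @ ratings / (sims.sum() + 1e-9)
scores[ratings[target_user] > 0] = -np.inf   # hide items the user already rated

print("recommend item", int(np.argmax(scores)), "to user", target_user)
```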
What are the current advancements in using neural networks to improve pain management and promote compassionate care?
4 answers
Current advancements in pain management and compassionate care involve utilizing neural networks, specifically convolutional neural networks (CNNs), to predict pain states based on brain activity. These CNNs analyze phase-based connectivity in the alpha band recorded through EEG during different conditions, such as resting and sensory states, to accurately classify pain conditions with high accuracy. By leveraging deep learning classifiers, these models can help track pain states, potentially aiding in the rehabilitation of chronic pain patients and enabling dynamic modifications to therapy. Furthermore, the application of CNNs in critical care settings may assist in identifying pain-like states in unresponsive patients, enhancing overall pain management strategies and promoting compassionate care.
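As a rough sketch of the kind of model described above, the code below defines a small convolutional network that takes a channel-by-channel EEG connectivity matrix and outputs pain versus no-pain scores. The input size (32x32), the layer configuration, and the random stand-in data are assumptions for illustration; this is not the architecture used in the cited studies.

```python
# Hypothetical CNN classifier over EEG connectivity matrices (pain vs. no pain).
import torch
import torch.nn as nn

class PainCNN(nn.Module):
    def __init__(self, n_channels: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # two pooling stages shrink the map by 4x in each dimension
        self.classifier = nn.Linear(16 * (n_channels // 4) ** 2, 2)

    def forward(self, x):                       # x: (batch, 1, 32, 32)
        return self.classifier(self.features(x).flatten(1))

model = PainCNN()
fake_connectivity = torch.rand(4, 1, 32, 32)    # stand-in connectivity matrices
logits = model(fake_connectivity)               # (4, 2): pain vs. no-pain scores
print(logits.shape)
```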
How have public policies influenced the development and implementation of Business Intelligence (BI) in various organizations?
5 answers
Public policies have played a significant role in shaping the development and implementation of Business Intelligence (BI) in organizations. Government initiatives have increasingly focused on leveraging BI projects to enhance services and decision-making within the public sector. Meanwhile, the rise of information technology has prompted organizations to rely on BI for data-driven insights to optimize their actions in response to the market. Research emphasizes the critical success factors for BI implementation, highlighting the need for organizations to concentrate resources on essential areas to maximize BI benefits. Additionally, studies have shown that factors like data management, relative profits, and IT infrastructure support influence the successful implementation of BI in non-profit organizations, emphasizing the importance of understanding these factors for effective BI utilization.
What are the key factors that determine the design and structure of value-added taxes (VAT) in different countries?
5 answers
The design and structure of value-added taxes (VAT) in different countries are influenced by various key factors. These include the principles of neutrality and destination-based taxation, leading to diverse VAT systems across OECD countries due to reduced rates, exemptions, and special regimes for specific economic sectors or social objectives. Factors influencing VAT compliance include international border controls, which correlate with VAT revenues and help combat domestic non-compliance, especially in countries with low institutional quality. Additionally, factors affecting VAT receipts encompass the VAT rate itself, economic indicators like imports and the service sector, administrative efficiency, and legal aspects such as government effectiveness and corruption control. The process of harmonization of VAT elements, particularly rates and benefits, varies in pace and efficiency across countries, with standard rates being more harmonized compared to preferential rates.
What are the most effective methods for assessing physical education in primary schools?
4 answers
Effective methods for assessing physical education in primary schools include a multidimensional approach focusing on behavioral, psychological, and physical components. This can involve assessing children's motivation, self-confidence, physical activity levels, knowledge of benefits, and physical fitness. Implementing student-centered pedagogies through nuanced understanding of students' needs and interests is crucial for effective assessment in early primary physical education. Additionally, utilizing a variety of data collection methods such as participation observation, video recording, map drawing, and photo elicitation can provide unique insights into students' needs and interests, aiding in the planning and implementation of student-centered approaches in physical education. Furthermore, incorporating formative assessment practices that consider students' attention levels can enhance the assessment process and contribute to improved academic development and comprehensive training of primary school students.
How is computer vision used for pavement inventory building?
5 answers
Computer vision plays a crucial role in pavement inventory building by enabling automated detection and analysis of road features. Various studies propose using computer vision systems for this purpose. For instance, one approach involves utilizing autonomous vehicles equipped with cameras and deep neural networks to monitor pavement defects in real-time, providing instant data on road conditions. Another method focuses on identifying road edges to determine road width automatically, aiding in pavement inventory construction. Additionally, computer vision systems can be employed to detect and classify pavement defects, serving as a pre-selection tool for detailed inspections and maintenance planning. These technologies streamline the process of pavement inventory building by leveraging advanced image analysis techniques for efficient data collection and analysis.
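As a simple illustration of the road-edge idea mentioned above, the sketch below uses OpenCV's Canny edge detector and a probabilistic Hough transform to extract candidate edge segments from a pavement image, the kind of step that could feed a road-width estimate. The file names and thresholds are assumptions, not values from the cited studies.

```python
# Candidate road-edge extraction from a pavement image with OpenCV.
import cv2
import numpy as np

img = cv2.imread("pavement.jpg")                 # hypothetical input image
assert img is not None, "replace 'pavement.jpg' with a real image path"

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

edges = cv2.Canny(blurred, 50, 150)              # edge map of the scene
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                        threshold=80, minLineLength=100, maxLineGap=10)

# draw candidate road-edge segments for visual inspection
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imwrite("pavement_edges.jpg", img)
```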
How is computer vision used for inventory building of bridges?
5 answers
Computer vision technology is utilized for inventory building of bridges through innovative approaches. For instance, computer vision is applied in waste analysis at construction sites, where algorithms detect waste in piling works using ground-level and drone images. Additionally, a Natural Language Processing framework is proposed to extract information from textual data in bridge inspection reports, aiding in system-level decision-making for bridge maintenance, rehabilitation, or replacement. Moreover, a structural identification framework incorporating computer vision-based measurements is presented for bridge damage assessment, introducing a displacement unit influence surface as a damage indicator. These diverse applications showcase the versatility of computer vision in enhancing inventory building and maintenance strategies for bridges.
How can reinforcement learning be used in financial markets?
6 answers
Reinforcement Learning (RL) has emerged as a powerful tool in the financial market, offering innovative solutions across various domains. Its application ranges from portfolio optimization, where RL algorithms help allocate assets to maximize returns while minimizing risk, to optimal execution strategies that aim to execute orders at the best possible prices, thereby reducing the cost of trading. The financial technology (Fintech) sector has particularly benefited from RL's ability to handle complex decision-making problems, including credit risk reduction, investment capital management, and profit maximization, showcasing RL's versatility in enhancing financial institutions' performance.

Moreover, RL has been instrumental in developing algorithmic trading strategies. By leveraging deep reinforcement learning (DRL) models, researchers have explored the potential of these algorithms for trading in stock markets, even though challenges such as the dynamic nature of financial datasets and the low signal-to-noise ratio persist. The introduction of FinRL-Meta, a data-centric library, further facilitates the training of financial RL agents by providing dynamic datasets from real-world markets, thus addressing some of these challenges.

Quantitative trading has also seen the application of RL, where mathematical models and data-driven techniques are employed to analyze financial markets. RL's ability to solve complex sequential decision-making problems without heavy reliance on model assumptions makes it a promising approach in this domain. Additionally, RL's applications extend to options pricing, market making, and robo-advising, demonstrating its capacity to improve decisions in complex financial environments with fewer model assumptions.

In summary, RL's application in the financial market is multifaceted, ranging from portfolio optimization and algorithmic trading to quantitative trading, among others. Its ability to learn and adapt to dynamic environments makes it a valuable tool for financial decision-making and optimization tasks.
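As a toy illustration of reinforcement learning applied to trading, the sketch below runs tabular Q-learning on a synthetic price series, with a coarse trend state and hold/buy/sell actions. Every element here (the price path, state definition, reward, and hyperparameters) is an illustrative assumption, far simpler than the deep RL systems discussed in the literature.

```python
# Tabular Q-learning on a synthetic price series (toy trading example).
import numpy as np

rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(0, 1, 500)) + 100       # synthetic price path

n_states, n_actions = 3, 3                            # trend: down/flat/up; act: hold/buy/sell
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

def state(t):
    diff = prices[t] - prices[t - 1]
    return 0 if diff < -0.5 else (2 if diff > 0.5 else 1)

position = 0                                          # shares currently held
for t in range(1, len(prices) - 1):
    s = state(t)
    # epsilon-greedy action selection
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
    if a == 1:
        position += 1                                 # buy one share
    elif a == 2 and position > 0:
        position -= 1                                 # sell one share
    reward = position * (prices[t + 1] - prices[t])   # mark-to-market P&L
    s_next = state(t + 1)
    # Q-learning update toward the bootstrapped target
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])

print(np.round(Q, 3))  # learned action values per trend state
```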