Topic

Chatbot

About: Chatbot is a research topic. Over its lifetime, 2,415 publications have been published within this topic, receiving 24,372 citations. The topic is also known as: IM bot & AI chatbot.


Papers
Journal ArticleDOI
TL;DR: As discussed in this paper, ChatGPT has been used to write parts of newspaper and magazine articles, testing readers' ability to determine whether the writing was by the chatbot or a human; Gilat and Cole caution that the bot is subject to major errors and biases.
Abstract: As is evident in a Letter to the Editor from Gilat and Cole, "How Will Artificial Intelligence Affect Scientific Writing, Reviewing and Editing? The Future is Here …" [1], ChatGPT is now impacting the medical literature. As ChatGPT was used to write part of the letter, we can say with certainty that ChatGPT is now published in Arthroscopy. ChatGPT is an artificial intelligence (AI) chatbot tool. In other words, ChatGPT is a machine, a program, a robot, or, technically, a large language model trained on enormous amounts of information from the Internet. It is able to respond to user prompts by answering questions; writing essays, poems, love letters, computer code, or business plans; solving problems, including math or physics; and more [2-4]. "The bot doesn't just search and summarize information that already exists. It creates new content, tailored to your request." [5] ChatGPT has been used to write parts of newspaper and magazine articles, testing readers' ability to determine whether the writing was by the chatbot or a human [4], so it is no surprise that Editorial Board Member Ron Gilat and Journal Board of Trustees Member and AANA Past President Brian Cole have now used the bot to write part of their letter [1]. Gilat and Cole caution that ChatGPT is subject to "major errors and biases," reiterating lay press reports that the chatbot has a "misinformation problem" [5], does not always tell the truth [5], could be "weaponized to spread disinformation" [6], and could be used to create "deepfakes" [7]. Gilat and Cole further caution that, "As authors, we need to make sure we do not use these tools to compose any part of a scientific work until these tools are … validated … and … perfectly accurate … (and) limited to specific tasks that do not compromise the integrity and originality of the work, and be subjected to meticulous human supervision." This is consistent with China's internet regulator, the Cyberspace Administration of China, enforcing regulation of what it calls "deep synthesis" technology, keeping AI-powered image, audio, and text-generation software from spreading "fake news" or information deemed disruptive to the economy or national security, and requiring "providers of deep synthesis technologies, including companies, research organizations and individuals, to prominently label images, videos, and text as synthetically generated or edited when they could be misconstrued as real" [7]. Similarly, New York City public schools have banned ChatGPT from the district's networks and devices (with the caveat that schools can request "access to the tool for AI and tech-related educational purposes"). Los Angeles, San Francisco, and Philadelphia school districts are grappling with the issue [3], and, as suggested by Gilat and Cole [1], "some products such as Turnitin—a detection tool that thousands of school districts use to scan the internet for signs of plagiarism—are now looking into how its software could detect the usage of AI-generated text in student submissions" [3].

And so, readers … did you suspect that the first half of the Letter by Gilat and Cole was written by a bot? I didn't immediately suspect so. And yet, I was suspicious that something was amiss because: 1) ChatGPT was uncapitalized and spaced incorrectly; 2) the authors failed to initially explain to readers what ChatGPT is and does; 3) there were no references, whereas in many places references were obviously required; and finally, 4) the last paragraph of the text written by the bot, plus the sentence prior to the last paragraph, added absolutely nothing that hadn't already been said and should have been deleted as redundant. In my experience as editor, while some novice authors might make some or all of these mistakes, Gilat and Cole are well known to me as frequent, prize-winning authors and editors. I was surprised they had submitted such a poorly written letter! However, as I read on, I realized the ruse. Well done, Ron and Brian!

A few other comments. Just as the Internet can contain distortions, resulting in ChatGPT committing errors, user input can also direct chatbot misinformation. According to the Recommendations of the International Committee of Medical Journal Editors, it is authors who bear "responsibility and accountability for published work" [8]. While we reviewers and editors do our best to make articles better and do our very best to detect errors or biases, authors are accountable for their work. Yet Gilat and Cole instructed the bot to provide "insight on the effect of artificial intelligence and tools" and to write on "what reviewers and editors should do to adjust and maintain the high scientific standards of the journal," and as a result the chatbot wrote, "It is therefore important for reviewers and editors to carefully check the articles they review for any errors or biases that may have been introduced by AI tools." [1] Again, reviewers, editors, and our learned readers must certainly "consider the scientific accuracy and validity" [1] of submissions and publications, but from an editorial standpoint, it is authors who are ultimately responsible for checking their articles for errors and bias, whether introduced by AI or otherwise [8]. Gilat and Cole, not the bot, also wrote that authors should "not use these tools to compose any part" of a scientific submission, and that in the future the tools might be used for "specific tasks that do not compromise the integrity and originality of the work and be subjected to meticulous human supervision" [1]. Personally, I'm not so sure authors need to wait to use ChatGPT. Regardless, whether now or later, I agree 100% that authors need to provide "meticulous human supervision" of the chatbot tool, whether this supervision is in the form of a rudimentary spelling check, the addition of relevant references, or scholarly review to ensure the absence of errors and bias.

Finally, I think the Cyberspace Administration of China's idea that authors should label images, videos, and text as synthetically generated, in whole or in part [7], is a good one. I plan to review this with my fellow editors, and I have queried our publisher as to their view of such a potential policy. I very much appreciate the forward-thinking academic leadership of Drs. Gilat and Cole. Their letter is stimulating and inspires substantial consideration, due diligence, and a great deal of learning, and should ultimately result in improving our journals. Very respectfully,

References:
1. Gilat R, Cole B. How will artificial intelligence affect scientific writing, reviewing and editing? The future is here…. Arthroscopy 2023;39:1119-1120.
2. New York Times. New chatbots can change the world. Can you trust them? https://www.nytimes.com/2022/12/10/technology/ai-chat-bot-chatgpt.html?smid=nytcore-ios-share&referringSource=articleShare. Accessed January 11, 2023.
3. CNN Business News. New York City public schools ban access to AI tool that could help students cheat. https://www.cnn.com/2023/01/05/tech/chatgpt-nyc-school-ban/index.html. Accessed January 11, 2023.
4. New York Times. A new era of AI blooms even amid the tech gloom. https://www.nytimes.com/2023/01/07/technology/generative-ai-chatgpt-investments.html?smid=nytcore-ios-share&referringSource=articleShare. Accessed January 11, 2023.
5. New York Times. Did a fourth grader write this? Or the new chatbot? https://www.nytimes.com/interactive/2022/12/26/upshot/chatgpt-child-essays.html. Accessed January 11, 2023.
6. New York Times. How A.I. could be used to spread disinformation. https://www.nytimes.com/interactive/2019/06/07/technology/ai-text-disinformation.html?action=click&module=RelatedLinks&pgtype=Article. Accessed January 11, 2023.
7. Wall Street Journal. China, a pioneer in regulating algorithms, turns its focus to deepfakes. Accessed January 11, 2023.
8. International Committee of Medical Journal Editors. Responsibilities in the submission and peer-review process. https://www.icmje.org/recommendations/browse/roles-and-responsibilities/responsibilities-in-the-submission-and-peer-peview-process.html#three. Accessed January 11, 2023.

18 citations

Book ChapterDOI
03 Jun 2019
TL;DR: The Jarvis framework is introduced, which provides a Domain-Specific Language (DSL) to define chatbots in a platform-independent way and a runtime engine that automatically deploys the chatbot application and manages the defined conversation logic.
Abstract: Chatbot applications are increasingly adopted in various domains, such as e-commerce or customer service, as a direct communication channel between companies and end-users. Multiple frameworks have been developed to ease their definition and deployment. They typically rely on existing cloud infrastructures and artificial intelligence techniques to efficiently process user inputs and extract conversation information. While these frameworks are efficient for designing simple chatbot applications, they still require advanced technical knowledge to define complex conversations and interactions. In addition, deploying a chatbot application usually requires a deep understanding of the targeted platforms, increasing development and maintenance costs. In this paper we introduce the Jarvis framework, which tackles these issues by providing a Domain-Specific Language (DSL) to define chatbots in a platform-independent way and a runtime engine that automatically deploys the chatbot application and manages the defined conversation logic. Jarvis is open source and fully available online.
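To make the platform-independence idea concrete, here is a minimal illustrative sketch: the conversation is defined once as intents plus handlers, and a small engine dispatches user messages to them regardless of the target platform. The Intent/Bot classes and the naive keyword matching below are hypothetical stand-ins for illustration only; they are not the actual Jarvis DSL or runtime API.

```python
# Toy, platform-independent chatbot definition in the spirit of a DSL such as
# Jarvis. All names here (Intent, Bot, handlers) are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Intent:
    name: str
    training_phrases: List[str]  # phrases used to recognise the intent


@dataclass
class Bot:
    intents: List[Intent]
    handlers: Dict[str, Callable[[str], str]] = field(default_factory=dict)

    def on(self, intent_name: str, handler: Callable[[str], str]) -> None:
        self.handlers[intent_name] = handler

    def reply(self, user_message: str) -> str:
        # A real engine would call an intent-recognition provider here;
        # this sketch falls back to naive keyword matching.
        for intent in self.intents:
            if any(p.lower() in user_message.lower() for p in intent.training_phrases):
                return self.handlers[intent.name](user_message)
        return "Sorry, I did not understand that."


greet = Intent("Greet", ["hello", "hi"])
order_status = Intent("OrderStatus", ["where is my order", "order status"])

bot = Bot(intents=[greet, order_status])
bot.on("Greet", lambda _msg: "Hello! How can I help you?")
bot.on("OrderStatus", lambda _msg: "Your order is on its way.")

print(bot.reply("hi there"))            # -> "Hello! How can I help you?"
print(bot.reply("Where is my order?"))  # -> "Your order is on its way."
```

The point of the design is the separation: the conversation definition (intents and handlers) never mentions Slack, a website widget, or any other deployment target, so the same definition can be deployed by the engine to different platforms.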

17 citations

Proceedings ArticleDOI
27 Mar 2019
TL;DR: The JARO project aims to streamline the hiring process by proposing a chatbot that conducts interviews: it analyzes the candidate's Curriculum Vitae (CV) and, based on it, prepares a set of questions to ask the candidate.
Abstract: Recently, recruiters have found it taxing to communicate with all of their candidates about the interview process, which makes conducting interviews a hassle. Moreover, where there are large volumes of applicants, communicating with thousands of candidates and carrying out other screening duties add to the recruitment burden. The proposed system, JARO, addresses the common concerns a candidate faces in mass interviews. Some of the challenges are inconsistent questions, interviews held on different days and at different times of day, the interviewer's mood, the venue of the interview, and so on. JARO therefore moves the interview process toward unbiased decision making with a chatbot that conducts interviews: it analyzes the candidate's Curriculum Vitae (CV) and, based on it, prepares a set of questions to ask the candidate. The system includes features such as resume analysis and an automated interview process. The software also asks follow-up questions based on the candidate's previous responses, using a Natural Language Processing (NLP) model. After the interview, the software analyzes the collected data to determine the right candidate for the position offered. Thus, the JARO chatbot mainly aims to streamline the process of hiring employees.
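The flow described in the abstract (CV analysis, question preparation, and post-interview scoring) can be pictured with a short, hedged sketch. The skill list, question templates, and scoring rule below are invented for illustration; the actual system relies on an NLP model rather than keyword matching.

```python
# Illustrative CV-driven interview flow in the spirit of JARO (hypothetical names).
from typing import Dict, List

SKILL_QUESTIONS: Dict[str, str] = {
    "python": "Describe a project where you used Python in production.",
    "sql": "How would you optimise a slow SQL query?",
    "machine learning": "Which evaluation metric suits an imbalanced dataset, and why?",
}


def extract_skills(cv_text: str) -> List[str]:
    """Naive resume analysis: pick out skills mentioned in the CV."""
    text = cv_text.lower()
    return [skill for skill in SKILL_QUESTIONS if skill in text]


def build_interview(cv_text: str) -> List[str]:
    """Prepare a question set tailored to the candidate's CV."""
    return [SKILL_QUESTIONS[s] for s in extract_skills(cv_text)]


def score_answer(answer: str, expected_keywords: List[str]) -> float:
    """Toy stand-in for the post-interview analysis step."""
    hits = sum(1 for kw in expected_keywords if kw in answer.lower())
    return hits / max(len(expected_keywords), 1)


cv = "Experienced data analyst, strong SQL and Python, some machine learning."
for question in build_interview(cv):
    print(question)
print(score_answer("I would add an index and rewrite the join.", ["index", "join"]))
```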

17 citations

Book ChapterDOI
17 Sep 2013
TL;DR: This work describes a hybrid method in which conversational trees are developed for specific types of conversations and, through a bespoke scripting language called OwlLang, domain knowledge is extracted from semantic web ontologies, allowing an evolving knowledge base.
Abstract: Traditionally, conversational interfaces such as chatbots have been created in two distinct ways: either by using natural language parsing methods or by creating conversational trees that exploit the natural Zipf-curve distribution of conversations using a tool like AIML. This work describes a hybrid method in which conversational trees are developed for specific types of conversations and then, through a bespoke scripting language called OwlLang, domain knowledge is extracted from semantic web ontologies. New knowledge obtained through the conversations can also be stored in the ontologies, allowing an evolving knowledge base. The paper describes two case studies where this method has been used to evaluate technology-enhanced learning (TEL) by surveying users, first about the experience of using a learning management system and second about students' experiences with an intelligent tutor system within the I-TUTOR project.
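As a rough illustration of the hybrid pattern (scripted conversation steps that pull answers from, and write new facts back to, an ontology), here is a minimal Python/rdflib sketch. OwlLang itself is a bespoke scripting language, so this is only an analogy, and the tiny tutoring ontology below is invented for the example.

```python
# Sketch: a conversation-tree leaf that answers from an ontology and a step
# that stores knowledge gathered during the conversation back into it.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/tutor#")

g = Graph()
course = EX.IntroToAI
g.add((course, RDF.type, EX.Course))
g.add((course, EX.hasTutor, Literal("Dr. Smith")))


def answer_tutor_question(course_name: str) -> str:
    """Conversation-tree leaf: pull domain knowledge out of the ontology."""
    results = g.query(
        "SELECT ?tutor WHERE { ?course ex:hasTutor ?tutor . }",
        initNs={"ex": EX},
    )
    tutors = [str(row.tutor) for row in results]
    return f"The tutor for {course_name} is {tutors[0]}." if tutors else "I don't know yet."


def record_feedback(comment: str) -> None:
    """New knowledge obtained in conversation is written back to the ontology."""
    g.add((course, EX.hasFeedback, Literal(comment)))


print(answer_tutor_question("Intro to AI"))
record_feedback("The quizzes were helpful.")
```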

17 citations

Proceedings ArticleDOI
TL;DR: This paper proposes to learn implicit user profiles automatically from large-scale user dialogue history for building personalized chatbots, arguing that restricted predefined profiles neglect the language behavior of a real user and cannot be updated automatically as user interests change.
Abstract: Personalized chatbots focus on endowing chatbots with a consistent personality so that they behave like real users, give more informative responses, and can further act as personal assistants. Existing personalized approaches have tried to incorporate several text descriptions as explicit user profiles. However, the acquisition of such explicit profiles is expensive and time-consuming, making them impractical for large-scale real-world applications. Moreover, the restricted predefined profile neglects the language behavior of a real user and cannot be updated automatically as user interests change. In this paper, we propose to learn implicit user profiles automatically from large-scale user dialogue history for building personalized chatbots. Specifically, leveraging the Transformer's strengths in language understanding, we train a personalized language model to construct a general user profile from the user's historical responses. To highlight the historical responses relevant to the input post, we further establish a key-value memory network of historical post-response pairs and build a dynamic post-aware user profile. The dynamic profile mainly describes what and how the user has responded to similar posts in history. To explicitly utilize the user's frequently used words, we design a personalized decoder that fuses two decoding strategies: generating a word from the generic vocabulary and copying a word from the user's personalized vocabulary. Experiments on two real-world datasets show that our model significantly improves over existing methods. Our code is available at this https URL
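A hedged sketch of the key-value memory step described above, assuming precomputed encodings: keys are the user's historical posts, values are the corresponding responses, and the current post attends over them to form a dynamic, post-aware profile. The dimensions, the random "encodings", and the copy gate below are placeholders for illustration, not the authors' implementation.

```python
# Key-value memory over dialogue history (illustrative, PyTorch).
import torch
import torch.nn.functional as F

hidden = 64
num_history = 10

# Pretend these came from a Transformer encoder over the dialogue history.
history_post_keys = torch.randn(num_history, hidden)    # encoded historical posts (keys)
history_resp_values = torch.randn(num_history, hidden)   # encoded historical responses (values)
current_post = torch.randn(hidden)                        # encoding of the input post

# Attention of the current post over historical posts selects which past
# responses are relevant ("what and how the user responded to similar posts").
scores = history_post_keys @ current_post / hidden ** 0.5  # (num_history,)
weights = F.softmax(scores, dim=0)
dynamic_profile = weights @ history_resp_values            # (hidden,) post-aware profile

# A personalized decoder could then mix generation from the generic vocabulary
# with copying from the user's own frequent words; only the balancing gate is shown.
copy_gate = torch.sigmoid(torch.dot(dynamic_profile, current_post))
print(dynamic_profile.shape, float(copy_gate))
```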

17 citations


Network Information
Related Topics (5)
User interface: 85.4K papers, 1.7M citations (79% related)
Mobile computing: 51.3K papers, 1M citations (78% related)
Social media: 76K papers, 1.1M citations (78% related)
Encryption: 98.3K papers, 1.4M citations (76% related)
Web service: 57.6K papers, 989K citations (76% related)
Performance
Metrics
No. of papers in the topic in previous years
Year: Papers
2023: 916
2022: 1,413
2021: 564
2020: 617
2019: 528
2018: 326