By Alexander Haidar
Artificial Intelligence (AI) has been a rapidly evolving technological innovation of the 21st century whose full potential has yet to be realized. Recent advancements in machine learning have led to an AI model, developed by researchers at MIT, that can predict which patients are at high risk of developing lung cancer up to five years before a diagnosis would normally be made.[1] Likewise, AI has been used in the field of nuclear material studies to streamline data collection and analysis.[2] While it has a long way to go before it can provide unique and new knowledge, AI has already proven to be a highly efficient data-processing tool.
The recent popularity of Natural Language Processing (NLP) AI such as OpenAI’s ChatGPT (which recently received a multibillion-dollar investment from Microsoft) has sparked new interest, as well as concerns, about the future of a world dominated by artificial intelligence. NLP models like ChatGPT can be useful tools for processing input information and providing a comprehensive explanation, analysis, or conclusion in a linguistically attractive medium that we understand easily (i.e., full sentences or paragraphs). GPT stands for Generative Pre-trained Transformer, meaning that the software is capable of generating new content based on patterns learned from its training data, adapting to human language.[3]
NLP models have also revealed an unforeseen risk: their written responses may be biased or inaccurate if the software is not properly trained or validated. Reports of offensive language and racially biased speech from ChatGPT recently prompted an OpenAI blog post outlining the company’s AI training process, as well as offering “a portion of guidelines pertaining to political and controversial topics.”[4] To ensure that NLP models are used in a responsible and ethical manner, developers such as OpenAI must remain vigilant in reviewing their training processes, while also allowing users to detect and report potentially harmful biases or hate speech.
ChatGPT’s growing popularity has also sparked a new debate about the use of AI in school settings. Many teachers’ and professors’ initial reaction has been to dismiss what they see as a way for students to avoid putting in the effort for assignments, a valid and proven concern since the tool can essentially write reading summaries, analyses, and essays without the student ever becoming familiar with the material. ChatGPT’s writing has even made it all the way to the United States House of Representatives, where Massachusetts Congressman Jake Auchincloss (D – MA 4th Congressional District) gave the first-ever floor speech written by AI.[5]
While AI is expected to create new job opportunities in fields such as software engineering, data science, and machine learning, it may also displace workers in other industries as automation and AI technology increasingly take over tasks previously performed by humans. Jobs that require significant human interaction and a high degree of emotional intelligence, such as those in healthcare, education, and the arts, are less likely to be automated. These industries therefore have an opportunity to benefit from NLP AI, as discussed by four Boston College (B.C.) professors at a forum organized by the Institute for the Liberal Arts. One presenter, who runs the first-year writing program at B.C., spoke to the opportunities of using ChatGPT as a teaching tool during the writing process. She also proposed a new pedagogical approach in which AI could enhance and change the way writing is taught and appreciated; now that thematic or analytical essays can easily be reproduced by AI, professors should be encouraged to focus on making the process of writing engaging through new approaches.
While there is a fear of AI displacing human workers from labor-intensive industries, the recent popularity and widespread use of NLP have indicated that problem-solving, critical thinking, and communication are skills AI cannot replace. Of course, calibrating NLP to be sensitive to complex human-formed concepts like race or sexuality will require a thorough and dynamic process to prevent harmful ideas from propagating through AI. If used properly, NLP will likely be implemented as a tool for enhancing human capabilities that machines cannot replicate.
[1] Ouyang, Alex. “MIT Researchers Develop an AI Model That Can Detect Future Lung Cancer Risk.” MIT News, Massachusetts Institute of Technology, 20 Jan. 2023, https://news.mit.edu/2023/ai-model-can-detect-future-lung-cancer-0120.
[2] Dean, Kristen Mally. “Artificial Intelligence Reframes Nuclear Material Studies.” Tech Xplore – Technology and Engineering News, Argonne National Laboratory, 16 Feb. 2023, https://techxplore.com/news/2023-02-artificial-intelligence-reframes-nuclear-material.html.
[3] Institute for the Liberal Arts, and Center for Teaching Excellence. “Chat GPT: Implications for Teaching and Learning.” YouTube, Boston College, Boston, MA, https://www.youtube.com/watch?v=GUnq9EihSt4&t=12s&ab_channel=InstitutefortheLiberalArtsatBostonCollege. Accessed 21 Feb. 2023.
[4] “How Should AI Systems Behave, and Who Should Decide?” OpenAI, OpenAI, 16 Feb. 2023, https://openai.com/blog/how-should-ai-systems-behave/.
[5] LeBlanc, Steve. “Massachusetts Congressman Reads AI-Generated Speech on House Floor.” WBUR News, WBUR, 26 Jan. 2023, https://www.wbur.org/news/2023/01/26/auchincloss-chatjpt-ai-artificial-intelligence.