Lessons from ChatGPT’s first year


ChatGPT launched on November 30, 2022, marking a pivotal moment for artificial intelligence. Its popularity soared almost immediately, with conversations widely shared on social media, and usage far exceeded OpenAI's expectations. By January, ChatGPT was drawing an estimated 13 million unique visitors daily, making it the fastest-growing consumer application on record.

In its eventful first year, ChatGPT has demonstrated the power of a good interface and the perils of hype, and it has seeded a new set of human behaviors. As a researcher who studies technology and human information behavior, I find that ChatGPT's influence on society comes as much from how people view and use it as from the technology itself.

Generative AI systems like ChatGPT are becoming pervasive. Since ChatGPT's release, references to AI have seemed obligatory in presentations, conversations and articles. OpenAI recently announced that 100 million people use ChatGPT every week.

Beyond people using ChatGPT at home, employees at all levels, up to top executives, are putting the AI chatbot to work in businesses. In tech, generative AI is being hailed as the biggest platform since the iPhone's debut in 2007. All the major players are betting heavily on AI, and venture funding for AI startups is booming.

Along the way, ChatGPT has raised numerous concerns, including its implications for disinformation, fraud, intellectual property and discrimination.
In higher education, much of the discussion has centered on cheating, which has been the focus of my own research this year.

ChatGPT's success is, first and foremost, a testament to the power of a good interface. AI has been part of countless everyday products for well over a decade, from Spotify and Netflix to Facebook and Google Maps. GPT, the AI model underlying ChatGPT, dates back to 2018. Even OpenAI's earlier releases, such as DALL-E, did not make the splash that ChatGPT did upon launch. It was the chat-based interface that set off AI's breakout year.

There is something uniquely compelling about chat. Language is innately human, and conversation is a primary way people interact with one another and assess intelligence. A chat interface feels intuitive, and it gives people a direct way to experience the seemingly "intelligent" capabilities of an AI system. ChatGPT's phenomenal success reinforces a lesson the Macintosh, the web browser and the iPhone taught before it: user interfaces drive the widespread adoption of technology. Design makes the difference.

At the same time, one of the technology's chief strengths, generating convincing language, also makes it well suited to producing false or misleading content. ChatGPT and other generative AI systems make it easier for criminals and propagandists to prey on human vulnerabilities. The technology's potential to supercharge fraud and disinformation is one of the key rationales for regulating AI.

Amid the technology's real promise and peril, generative AI has also offered another case study in the power of hype.
Throughout the year, articles have proclaimed that AI will transform every facet of society and that the technology's widespread adoption is inevitable.

ChatGPT is hardly the first technology to be hyped as "the next big thing," but it stands out for simultaneously being hyped as an existential threat. Numerous tech leaders and even some AI researchers have warned about the rise of superintelligent AI systems that could harm or destroy humanity, though I believe these fears are exaggerated and unlikely to be realized.

The media environment rewards hype, and the current venture-funding climate further fuels AI hype in particular. Playing to people's hopes and fears breeds anxiety without supplying what is needed to make sound decisions.

After 2023's surge, AI's progress may slow in the coming year. Technical limitations and infrastructure hurdles, such as chip manufacturing and server capacity, are likely to constrain AI's advance. At the same time, regulation of AI development may gain momentum.

A slowdown should give room for norms of human behavior to form, covering both etiquette, such as when and where using ChatGPT is socially acceptable, and effectiveness, such as identifying where ChatGPT is most useful.

Generative AI systems like ChatGPT will settle into people's workflows, allowing them to complete tasks faster and with fewer errors.
Just as people learned to "google" for information, humans will need to learn new practices for working with generative AI tools.

But the prospects for 2024 are not entirely rosy. It is shaping up to be a historic year for elections around the world, and AI-generated content will very likely be used to sway public opinion and stoke division. Meta has banned the use of generative AI in political ads, but that is unlikely to stop tools like ChatGPT from being used to create and spread false or misleading content.

Political disinformation spread across social media in 2016 and again in 2020, and it is highly probable that generative AI will be put to the same use in 2024. Even beyond social media, conversations with ChatGPT and similar chatbots can be sources of misinformation in their own right.

As a result, a lesson for everyone, whether they use ChatGPT or not, is to be ever more cautious with digital media of all kinds in the technology's second year.
