Zipes is also a contributor to The Saturday Evening Post print magazine.
Artificial intelligence (AI) has captured worldwide attention. The concept is based on an area of computer science called machine learning in which computer software can form independent conclusions by analyzing vast amounts of data, and even solve difficult problems that require knowledge from a wide variety of specialties. At the same time, the internet has transformed how we communicate, with chat rooms and various messaging applications becoming increasingly popular. Such interactions have reached the point that chatbots using AI can simulate human conversations, with the potential danger of misinformation and cybercrime.
At a recent congressional hearing, OpenAI CEO Sam Altman testified that AI “could cause significant harm to the world,” and indicated that Congress needs to write rules to safeguard its use. He warned that technologies like the company’s ChatGPT (Generative Pre-trained Transformer) chatbot were potentially dangerous. Some people worry that AI will soon be able to outthink humans and become harmfully manipulative. Although AI will inevitably replace millions of human jobs, it will also create millions of new ones.
It seems to me that the invention of AI and ChatGPT is like the discovery of the atom. Depending on how it’s used, nuclear energy can power the energy demands of cities around the world, or can be used to annihilate those same cities.
I asked ChatGPT to write about its applications in medicine and to list its dangers, much like asking an individual to reveal his good and bad qualities. Here is some of the information it provided.
In health care, AI can review and analyze vast amounts of data — from personal histories and physical examinations to laboratory data such as ECGs, imaging studies, genetic information, and chemistries — and offer diagnostic information and therapeutic planning that can improve patient outcomes. AI can spot early signs of illness not readily recognized by clinicians, helping them make informed decisions that optimize patient care. ChatGPT can be used to communicate with patients to be certain they follow medical recommendations and to answer questions they might have about treatment complexities. The medical applications seem to grow daily. So do the potential hazards of ChatGPT, some of which include:
- Misinformation or disinformation that can appear true.
- Cyberbullying to harass or bully individuals, particularly children or teenagers.
- Racial or sexist bias leading to discrimination against certain groups.
- Malicious content by generating phishing messages, fake ads, and other fraudulent content.
- Privacy concerns such as exposing personal conversations.
- Security threats that steal sensitive information and disrupt services.
- Lack of transparency that clouds how information was accumulated and used, preventing accountability.
- Fraudulent preparation of reports, articles, books, or other literature.
- Fraudulent replication of works of art, music, or inventions.
To function successfully, AI requires large amounts of user information, which raises the question of who has access to this information. Sensitive medical, financial, or private information can be inadvertently exposed and lead to devastating consequences.
Since Generative AI is designed to mimic human conversations, users can become emotionally attached to chatbots, leading to sharing sensitive information or unethical behavior. Vulnerable, lonely users seeking companionship or support can end up being bullied with abusive or discriminatory language, or groomed for financial gain. Imagine a phone call with a voice that sounds exactly like a close relative asking you to send money for an emergency.
As AI technology continues to evolve, it is important to consider the consequences of its widespread use and take steps to mitigate potential risks and abuse. A collaborative effort among researchers, policymakers, and the public must ensure that the development and application of Generative AI technology is not only beneficial but also responsible and ethical. Because the use of AI is unstoppable, we need clear guidelines on how the technology will be applied.
As an editor of two cardiology journals, I am particularly concerned about fraudulent writing. Authors need to be clear about their use of ChatGPT and be held responsible for its contents. Data used to train algorithms must be kept confidential and secure, represent multiple populations to ensure diversity, and be monitored regularly, all of which needs to be transparent.
AI is transformative. The potential benefits in health care and other areas are vast, but so are the hazards. In the end, like nuclear power, it’s all about how we use it.