Riding the Waves of Artificial Intelligence

Rebecca Wolpinsky

Reading about artificial intelligence, or AI, can feel like watching a seagull ride the waves. The anticipated benefits and risks are in a constant ebb and flow. 

“AI will ease rush hour traffic!”

“Deepfakes pose serious threats to elections.”

“Medical diagnoses will improve!” 

“AI will eliminate human jobs.”

“Writing a school essay is faster with AI!”

“Students are using AI to cheat!”

It’s undeniable that AI is everywhere, and it’s moving fast. AI is changing the way people write code, support customers, create movies, and more. The applications seem endless. Education is one of the areas AI is anticipated to affect most, and we’re watching carefully to see how the field evolves as AI becomes smarter. AI has the potential to change how courses are designed and delivered, and how learners are guided along their education path. Given all of this disruption, it’s important to understand how the technology works; bear with me as I offer some very simplified definitions of AI so we can have a productive discourse about it.

  • Artificial intelligence is the ability of a computer to learn, think, and perform tasks typically completed by humans. 
  • Machine learning describes a computer’s capability to learn patterns and execute functions without explicit instructions. 
  • And generative AI, or GenAI, enables a computer to create new data or outputs based on those patterns.
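
If you’re curious what “learning patterns without explicit instructions” looks like in practice, here is a minimal sketch in Python using the scikit-learn library. The data and the hidden rule are my own toy assumptions; the point is that the program is never given the rule, only examples.

```python
# A toy illustration of machine learning: the program is never told the
# rule y = 2x + 1; it infers the pattern from example inputs and outputs.
from sklearn.linear_model import LinearRegression

X = [[1], [2], [3], [4]]   # example inputs
y = [3, 5, 7, 9]           # outputs that follow a hidden pattern

model = LinearRegression().fit(X, y)  # the "learning" step
print(model.predict([[10]]))          # prints ~[21.], applying the learned pattern
```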

How can these features be incorporated into education?

Instructors are experimenting with new ways to use AI to create more engaging, relevant course activities and materials. For example, AI tools can monitor learner progress and adapt assignments to each learner’s strengths, weaknesses, and pace. GenAI can tailor examples for cultural or industry relevance. And chatbots can guide learners at all hours of the day. 

Learners are also using AI in their coursework. Platforms like OpenAI’s DALL·E can help students quickly create artwork for presentations. Google’s Duet AI can craft a custom schedule to manage group projects. And ChatGPT, the most familiar tool from OpenAI, can draft an essay in seconds. 
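
For readers who want to peek behind the chat window, drafting text with a GenAI model can be as short as the sketch below, which uses OpenAI’s Python SDK. The model name and prompt here are placeholder assumptions on my part, not recommendations.

```python
# A hedged sketch of drafting text programmatically with OpenAI's Python SDK.
# Requires the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; any chat model works
    messages=[{"role": "user", "content": "Draft a 200-word essay on tide pools."}],
)
print(response.choices[0].message.content)  # the generated draft
```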

But is this AI assistance ethical? And what are the learner’s obligations when using these tools?

I view AI tools much like the many other reference tools learners have used for generations. Encyclopedias, documentaries, journal articles, and media interviews have long supplemented our comprehension of topics and provided essential context, evidence, and examples in our assignments. When thoroughly understood, properly cited, and used with an instructor’s approval, these resources, including AI tools, are valuable additions to course material. 

Thoroughly understanding a concept means the learner can defend an AI-generated answer with actual knowledge and comprehension. For example, using AI to create sample data for a spreadsheet expedites the assignment, but the learner must still be able to apply the correct code to manipulate the data and achieve the desired solution. Similarly, a learner who constructs a report outline with the assistance of AI must still supply original thoughts and arguments and properly vetted background information. 
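
To make that spreadsheet example concrete, here is a minimal sketch in Python with pandas. The sample rows stand in for AI-generated data, and the column names are hypothetical; the learner still has to write, and be able to defend, the code that manipulates the data.

```python
# Hypothetical scenario: the rows below stand in for sample data an AI tool
# generated on request, but the manipulation is the learner's own work.
import pandas as pd

sales = pd.DataFrame({               # column names are made up for illustration
    "region": ["North", "South", "North", "South"],
    "units":  [120, 95, 140, 80],
    "price":  [9.99, 9.99, 12.49, 12.49],
})

sales["revenue"] = sales["units"] * sales["price"]   # the learner's own code
print(sales.groupby("region")["revenue"].sum())      # revenue totals by region
```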

Proper citation is also an established tenet within both education and society. Guidance on AI citation is evolving, but some frameworks are available. One example is a standard citation that includes an “(author, date, title)” format, such as (OpenAI, 2023, ChatGPT). Disclosure statements are also common, such as “ChatGPT by OpenAI was used to draft this document and offer revisions. Content was reviewed by the learner for accuracy and style.”

The “accuracy” component in the disclosure statement is key. Remember, AI is a computer and does not have the capacity to recognize impossible scenarios or outcomes. These inaccurate outputs are also known as “hallucinations,” and it’s the learner’s responsibility to catch them. 

Funny examples of hallucinations include a defensive chatbot that insisted “Avatar: The Way of Water” had not yet been released and that the user was confused about the current date, and a moody chatbot that professed its love to a user. More serious examples include inaccurate criminal accusations against an elected official and harmful misinformation about foraging for mushrooms. 

Worries about malicious use of AI and breaches of privacy are also warranted. Governments are developing regulations to establish protections in this space, as we’ve recently seen with the European Union’s AI Act and an Executive Order issued by U.S. President Joe Biden. We’ll all need to monitor these efforts and make our own contributions toward society’s responsible use of AI. 

I encourage you to remember this: Concerns about the potential impacts of evolving technologies and innovations are not new; they proliferate almost anytime we experience change. Still, the future of AI in education is exciting, and I believe we are right to approach it with cautious optimism and a reasonable balance of trust and skepticism. AI cannot replace a commitment to learning or mastery of a topic, but it can be a helpful resource that enhances our journey.
