Rewriting of the E/LA Standards
Can you think of an instructional task in any setting that doesn’t involve the use of language in some form or another? Every teacher in the K-12 space, no matter the assignment or credential, is a teacher of English/Language Arts.
Now look at the words that surround the use of generative AI. Tools such as Bard or ChatGPT are examples of large language models (LLMs). They have been trained by ingesting vast quantities of the text that has been posted online. These LLMs are often called chatbots. When you initiate a task with ChatGPT you are asked to start a new “chat.” The words you type into the dialogue box are classified as a “prompt.” Fundamentally, interaction with an LLM is language based.
This intimate relationship between the language-based essence of education and the very nature of generative AI calls for a revision of the English/Language Arts standards now in use.
In prior blogs in this series for the New Tech High School Center for Excellence I have called for a rewrite of the requisite skills needed for success in education, work and the community. These skills were most notably codified by the Partnership for 21st Century Learning, but they, like the English/Language Arts standards, require revision due to the advent of LLMs.
Calls for AI Regulation in Education and Policy
In September, UNESCO released a position statement in which it called for governments to regulate generative AI in schools. UNESCO cites data from a global survey of more than 450 schools and universities that shows less than 10 percent had either policies or formal guidance concerning the use of generative AI.
The Modern Language Association and the Conference on College Composition and Communication formed a task force that produced a working paper in July providing recommendations on the ethical and practical rollout of generative AI in schools.
Cornell University put together a task force that generated in July a report called Generative Artificial Intelligence for Education and Pedagogy. The executive summary includes a sentence that I think should drive our discussion about revision of the E/LA standards: “Educators must take generative artificial intelligence (GAI) into account when considering the learning objectives for their classes, since these technologies will not only be present in the future workplace, but are already being used by students.”
Learning Objectives and Standards
What else are the Common Core State Standards (or your state’s version of them) but a list of learning objectives? The standards tell us what students should know and be able to do. I can’t envision an educational future in which students don’t know the ethical and effective use of generative AI. Or, a future in which students don’t know how to complete tasks in design, creativity, thinking, writing, analysis, and organization without the use of generative AI as a collaborative partner.
Let’s examine California’s College and Career Readiness Anchor Standards for Writing. Under the category of Production and Distribution of Writing, we find Standard 5: “Develop and strengthen writing as needed by planning, revising, editing, rewriting or trying a new approach.” And Standard 6: “Use technology, including the internet, to produce and publish writing and to interact and collaborate with others.”
If I were of a legalistic mind I could argue that the standards already allow for the use of generative AI to accomplish these outcomes. I’m confident a collaborative writing process that includes ChatGPT would constitute “a new approach.” Obviously, that was not the original intent of the authors. That’s why we need, at a minimum, addenda or revisions; otherwise there will be a bunch of “new approaches” that involve unapproved use of generative AI.
We Need New Definitions of Plagiarism and Paraphrasing
Plagiarismcheck.org conducted a study that indicates Google’s Bard was responsible for 45% of the plagiarism it detected in writing samples. Turnitin.com has processed 88 million papers since April, detecting AI-written content (constituting between 20 and 80 percent of the text) in 12 million of those papers.
The most troubling element of this automated approach to plagiarism detection, as detailed in a Stanford study, is that one automated detector “may penalize non-native writers with limited linguistic expressions.” According to the Migration Policy Institute, 10 percent of K-12 students nationally are classified as Multilingual Learners. Much already has been made of the inherent bias in LLMs. A pattern of plagiarism detection that penalizes MLLs would hamper the inevitable rollout of generative AI in schools.
Accordingly, the call for revision of the standards must include new, robust definitions of what constitutes plagiarism as well as paraphrasing.
And what if, as the APA suggests, you follow its bibliographic style to cite ChatGPT: “Create an APA reference entry that lists OpenAI as the author and ChatGPT as the title, adding the date of the version used (shown at the bottom of the page on the ChatGPT site), the descriptive text ‘Large language model’ in square brackets, and the URL”? Does that absolve you of plagiarism?
Where Do We Go from Here?
The task before us is difficult. Generative AI presents an instructional and ethical challenge that far surpasses the launch of the Internet and the arrival of cell phones. Rewriting the standards and codifying updated definitions of key terms will allow the development of policies that are fair and reasonable for all learners. Let’s get going.
David Ross (@davidPBLross) is the retired CEO of the Partnership for 21st Century Learning and the former Senior Director of the Buck Institute for Education (now PBLWorks). David was an 11th grade American Studies (History and English 11) team teacher. David created curriculum design templates, exemplary projects, rubrics for critical thinking and collaboration, and project management techniques.
