Academic Motivation: The Missing Learning Outcome We Rarely Measure
In a classroom setting, let’s define motivation for the moment as the voluntary drive to engage in deep academic learning. Deep learning is nearly impossible to accomplish without strong motivation—and motivation itself is to some degree learned, not innate. We learn to be motivated through experience, by discovering which pursuits reward our inner drives even when they demand effort and inconvenience. When we genuinely want to do something, we’ll move mountains to make it happen.
This shouldn’t be controversial. It’s common sense, easily verified by a moment’s reflection on your own life. Think about the last time you were truly motivated to master something—a hobby, a skill, a creative project. You didn’t need external incentives. The work itself, and the sense of building capacity, was the reward.
Yet in most public school classrooms, we treat motivation either as a stable personality feature or as something we manipulate rather than cultivate. We rely on grades, points, approval, credit—external currencies completely unrelated to the joy or satisfaction of improving in an area we’ve decided for ourselves we want—or need—to develop.
Among university academics we speak of individual “intellectual commitments” and learn to massage the grading relationship between mentors and students so those commitments can emerge. The more advanced the learning becomes, the more space teachers set aside for self-directed learning. With children, we use extrinsic motivators intuitively, almost unconsciously, with hidden strategies that vary wildly from teacher to teacher.
Here’s the paradox: Motivation isn’t a byproduct of learning. It’s a learning outcome in itself. When students leave our classrooms having learned to find motivation in challenge, curiosity, and growth, we’ve taught them something more valuable than any content standard.
Yet in a system that measures commas, formulas, facts, procedures—everything imaginable that our academic learning outcomes demand—we don’t grade for motivation. Somehow it remains a standalone, wouldn’t-it-be-nice kind of mental model. We get serious about measuring it when a university researcher runs a study on goal orientation or reinforcement schedules, but we don’t place it at the center of the classroom, out in the wild where it belongs, when we are teaching.
Perhaps we should count ourselves fortunate for this omission. If we graded motivation the way we grade everything else, we would decimate it. If we contracted with ETS to test entire institutions on “motivation” as well as “reading comprehension,” we would be recasting motivation as a performance.
Just as reading a text to recite its fact pattern for an external authority does not bode well for producing readers in real life, assessing effort exerted for the sake of exerting effort isn’t likely to produce learners who voluntarily learn even when no one is watching.
We would turn students into pawns, performing enthusiasm for points, manufacturing interest for credit. As the song says, “I can’t make you love me if you don’t.” The moment we demand motivation, reward it extrinsically, or punish its absence, we transform something precious and internal into just another compliance game.
When we want to place motivation at the core of any learning activity, we must leave space for students to engage in the work for its own sake. The aesthetic component—the beauty, the challenge, the satisfaction—draws the inner being into the work. But aesthetic response is tough to evoke under cognitive surveillance. In a system regulated by content curricula tied to tight time spans, atomized collections of knowledge that must be covered, and mandatory standardized tests, finding ways to leave this space for emerging interest and curiosity is a challenge.
The easiest defense against self-incrimination is to insist that learning is not and should not be construed as an aesthetic act. Learning, in this view, is about mastery of discrete skills and accumulation of verifiable knowledge. It’s measurable, sequential, and standardized. Beauty, satisfaction, personal meaning—these are nice bonuses if they happen to occur, but they’re not the point.
The point is coverage, proficiency, college readiness. In this view, it’s absurd to look for joy in the work. We do children no favors, so goes the argument, by teaching them a distortion of what they will face on the job in the real world. The signature on the paycheck stands for an employer who has little interest in whether you find joy in your work.
But watch a child lost in a book they chose themselves. Watch a student suddenly see the pattern in the math, not because they have to, but because the pattern is elegant and surprising. Watch a writer revise a sentence seven times because they can hear it’s almost right but not quite. This is why we don’t want our writers routinely turning their sentence-making over to bots. The joy of insights emerging during writing is what makes us writers. Even using a bot matters less if the motivation still arises from struggling with a sentence that sounds wrong, doing something to fix it, and then recognizing when it works.
These moments aren’t departures from learning—they are learning at its most powerful. The aesthetic pull to notice how beautiful mathematics is, the intrinsic drive to get something right for its own sake, transforms compliance into intellectual engagement.
Yet our system treats these moments as luxuries we can’t afford. There’s too much to cover, too little time, too much accountability riding on test scores. So we ask: “What’s the learning target?” “How will we measure it?” “What’s the data saying?” Rarely do we ask: “Will students find this beautiful? Will it pull them in? Will they want to keep going after the bell rings?”
The question “What space is there in this activity to genuinely invite engagement?” isn’t a typical criterion in lesson design. Even more rare is the built-in assessment of motivation itself. We design lessons to cover content and measure mastery, but rarely ask: Are students learning to want to work hard to improve their ability to learn more of this sort of content?
Peer Feedback in the Writing Classroom: A Case Study in Motivation
Let’s focus on a common instructional practice: peer feedback in the high school writing process. In mainstream writing instruction, peer feedback is deeply embedded in the instructional culture—so much so that it may actually seem impossible to teach writing without it. The typical approach pairs the writing prompt with a rubric: students draft their work, then use the rubric to evaluate each other’s papers before revision or, more aptly, “fixing.” This prompt-rubric technique has become the backbone of writing instruction in many high school classrooms and in many universities.
Ironically, this is also the instructional model that has drawn intense scrutiny as AI tools have emerged. When writing instruction centers on meeting rubric criteria, it’s no surprise that AI can replicate student performances with unsettling accuracy. That’s what AI does. The system has inadvertently trained students—and now machines—to produce writing that checks boxes rather than communicates ideas.
I believe—no, I know—that rubrics serve important purposes and have valid uses nothing else can fill. They can help students understand complex criteria, provide a shared vocabulary for discussing effectiveness, and make invisible standards visible. Writing a policy brief, for example, requires understanding what a policy brief is, what one looks like. But a rubric for writing a poem?
This explains why automated essay scoring has become so attractive in writing classrooms. If we conceive of the writing classroom as a space for learning to write predetermined forms for external reasons and purposes—irrespective of the writer’s personal intentions and motives—then automated scoring makes perfect sense. Why wouldn’t we use AI to evaluate whether students have produced the five-paragraph essay structure, included sufficient evidence, maintained proper grammar, and hit the required word count? The machine can do this faster and more consistently than any human reader.
But the moment we understand writing as an act of thinking, of discovering what we mean through the struggle to say it clearly, of developing a voice that matters to the writer and is heard by the reader—the moment we center motivation and personal investment—automated scoring is obviously incompatible with the goals of writing instruction. The tool works only if we’ve already reduced writing to a performance of form rather than an act of meaning-making.
Using rubrics as a ubiquitous tool to standardize generic writing performances and codify criteria for grading purposes is not a valid use, though it may be reliable. When rubrics become scoring instruments with high stakes (grades) attached, where reliability is paramount, rather than learning tools, where validity is the prize, they shift the entire activity from growth to measurement, from exploration to compliance.
Which brings us back to motivation. When we ask students to engage in peer feedback using a rubric, what are we really asking them to do? Are we inviting them into genuine dialogue about writing—about what works, what confuses, what moves the reader? Or are we asking them to audit each other’s papers against a checklist?
More importantly, how do students experience this activity? Do they find it valuable? Does giving feedback help them see their own writing differently? Does receiving feedback motivate them to revise, or does it feel like one more hoop to jump through before submitting for a grade?
These aren’t rhetorical questions. They’re empirical ones teachers might have an interest in examining. And if motivation is truly a learning outcome we care about, we need to assess it directly.
Assessing What Students Actually Experience
So I created a survey tool to provide an example, not a ready-made thing to do Monday morning. Using an AI-powered app I built and make available to you in this post, I generated questions specifically designed to assess how high school students experience peer feedback in their writing process.
I haven’t field-tested the assessment tool, and I would want to refine the language of the response options based on the situation in the classroom, but for my purpose here it serves as an example I could use repeatedly over time to track change. I’m using the term “assessment” in its root sense—finding out what the reality is, not evaluating or appraising a performance or an object. Here’s what emerged:
Student Survey: Peer Feedback Experience
1. When I give feedback to my classmates on their writing, I feel:
Confident that I am being useful
Somewhat useful, but uncertain
Nervous because I’m not so good at it
Uncomfortable or unsure what to say
Like I’m not helping anybody much
2. When I receive feedback from my classmates on my writing, I feel:
Grateful for the help
Somewhat appreciative
Unsure of what to do with it
Defensive or discouraged
Like the feedback isn’t helpful
3. The peer feedback process helps me learn to write better:
Strongly agree
Agree
Neutral
Disagree
Strongly disagree
4. What makes giving and getting peer feedback most challenging for me?
Nothing—I find it valuable
I worry about hurting feelings because mine are sometimes hurt
I don’t know what feedback to give
Peers don’t take it seriously
I don’t trust my classmates’ opinions
5. I put effort into giving thoughtful feedback because:
I genuinely want to help my classmates improve
I don’t want to get a reputation for being lazy or selfish
It’s required for a participation grade
I hope they’ll give me good feedback in return
I don’t put much effort into it
6. After receiving peer feedback, I am:
Excited to revise my work
Weighed down by concerns about my grade
Unsure what to do with it
Frustrated or confused
Unlikely to use it
What This Survey Reveals
Some teachers might object: “I already know my students well enough. I don’t need a survey to tell me how they’re experiencing peer feedback.” It’s true that teachers spend long periods with their students over many weeks. Some teachers DO assess motivation informally through observation, journal prompts, etc. They DO gather information that helps them make sense of the learning ecology and adjust for individual issues and concerns.
The strongest teachers are likely those with the best intuitions about how to respond in the moment, and that intuition is grounded in qualitative data far more useful than any survey. Good teachers develop irreplaceable intuitions about what’s working and what isn’t. But even good teachers can benefit from periodic, low-stakes quantitative data collection over time, not for instructor self-evaluation but for evaluation of instructional routines and practices.
We know this because good teachers do this assessment work routinely already—when it involves course content. It’s the soft outcomes that don’t get such careful attention. Motivation determines whether our carefully designed instructional activities actually produce learning or just produce compliance. The interrelations among learning, compliance, and performance are among the most complex in pedagogy. We need every assist we can get. If peer feedback is supposed to be among our most powerful tools for developing writers, shouldn’t we actually assess whether it’s building motivation or undermining it?
Like motivation itself, this kind of assessment works best when it’s ongoing rather than a one-time snapshot. The goal isn’t to freeze students’ feelings at a moment in time, but to use these numbers collectively as leading indicators, catching glimpses of how students’ feelings evolve as instruction is adjusted.
Notice what these questions assess that traditional writing instruction typically ignores:
The emotional dimension. Questions 1 and 2 separate the experience of giving feedback from receiving it—a crucial distinction. A student might feel confident offering suggestions but defensive when receiving them. Or they might value feedback from others while doubting their own ability to help peers. These are different motivational challenges requiring different instructional responses.
The purpose behind effort. Question 5 doesn’t ask whether students put in effort—it asks why. Are they motivated by genuine desire to help (intrinsic), by obligation to peers (social), by grade requirements (extrinsic), or by transactional reciprocity (strategic)? Each answer tells us something different about what’s driving engagement.
Barriers to engagement. Question 4 identifies specific obstacles: fear of social consequences, lack of skills, peer dynamics, trust issues. These aren’t problems we can address by refining our rubric or providing better prompts. They’re motivational and relational challenges that require different interventions entirely.
Behavioral intent. Question 6 measures what matters most: Does the feedback actually motivate revision? If 60% of students select “Unsure what to do with it” or “Unlikely to use it,” we’ve learned something crucial about the effectiveness of our peer feedback process—something that would never show up in our assessment of their final papers.
The Data We’re Not Collecting
Imagine discovering that most of your students feel “uncomfortable or unsure what to say” when giving feedback. That’s actionable intelligence. Or imagine learning that students who receive feedback feel “defensive or discouraged.” Now you know you need to teach students how to receive feedback, not just give it. You might model how to read feedback generously, how to distinguish helpful critique from unhelpful criticism, how to use confusion as information rather than take it as judgment.
When we implement peer feedback sessions, collect the feedback sheets to verify completion, and move on to grading the final drafts, we leave a lot to chance. As near as I can tell from research I’ve read, we don’t routinely ask students how they experienced the process. We don’t systematically assess whether it motivated them to improve. We rarely gather data on what’s working and what’s creating barriers for them.
Creating Your Own Motivation Surveys
Want to create your own survey? In my ongoing effort to learn more deeply how to use AI, I built an AI-powered app to demonstrate what’s possible: https://codepen.io/Terry-Underwood/pen/ogbGBRq
It’s free to use, requires no account, and runs entirely in your browser. Use it to generate questions based on what you want to understand about your students, then copy them into Google Forms or your preferred survey platform. It’s also possible to link such survey tools to data collection and analysis components, though that requires more technical infrastructure.
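To make that concrete, here is a minimal sketch of what such lightweight infrastructure could look like, written in Python with only the standard library. It assumes each administration of the survey has been downloaded from Google Forms as a CSV; the file names and the question header below are hypothetical placeholders to replace with your own.

```python
# Minimal sketch: compare response distributions for one survey question
# across two administrations of the survey, each exported from Google
# Forms (or any survey platform) as a CSV file.
# File names and the question header are hypothetical placeholders.

import csv
from collections import Counter

administrations = {
    "September": "peer_feedback_sept.csv",  # hypothetical export
    "November": "peer_feedback_nov.csv",    # hypothetical export
}

# Must match the column header in your own export exactly.
question = "After receiving peer feedback, I am:"

for label, path in administrations.items():
    with open(path, newline="", encoding="utf-8") as f:
        responses = [row[question] for row in csv.DictReader(f) if row.get(question)]
    counts = Counter(responses)
    total = len(responses) or 1  # guard against an empty file
    print(f"\n{label} (n={len(responses)})")
    for answer, n in counts.most_common():
        print(f"  {answer}: {n} ({100 * n / total:.0f}%)")
```

Run after each administration, a script like this turns isolated snapshots into the kind of leading indicators described above, with no grades or names attached.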
The technology is far less important than the intention to collect objective data that can inform conversations with students. You can do it with paper and pencil and scantron sheets. This is a demonstration tool meant to earn a few hearts from you just because I made it (Yay!), not a polished product. But it shows that if I can do this at my age and level of technical expertise, any teacher can harness AI to create motivation surveys for short-term teacher research projects and long-term instructional reflection. My bet is that a lot of you reading this already know more about these tools than I will ever know.
Why This Matters
We can’t make students like academic learning, but we can work to make it harder to dislike. We can ask the questions that matter with specifics that matter right now, today, in this classroom. We can gather data on the emotional and motivational dimensions of learning with the same care we bring to assessing content mastery.
Motivation isn’t a nice-to-have. It’s the engine of deep learning. And like any learning outcome that matters, it deserves to be assessed, measured, tracked, and cultivated with intention. The data is there for the taking informally, qualitatively, quantitatively—if we’re willing to give it a hearing.

"When students leave our classrooms having learned to find motivation in challenge, curiosity, and growth, we’ve taught them something more valuable than any content standard." Hear, hear!