Transcript

Brock: How do I assess digital assignments?
Reider: Yeah, so that's a great question.
VanKooten: Yeah, so I've done a lot of thinking and a lot of work about assessment of digital texts in the classroom.
Lee: Someone once said this to me and I've used it now multiple times: "assessing digital texts is a beast."
Hodgson: Assessment has always been one of my stickier issues.
Losh: The questions about assessment in digital rhetorics are perilous.
Lee: And I think that just stems from our... we have a lot of anxiety around assessing kind of standard alphabetic texts, and so then you're throwing in additional modes of communication, and it's not just that there are more modes, it's that these modes collaborate and that they mix, so that complicates it.
Brooke: I think that often—often what drives digital rhetoric assignments is novelty and defamiliarization, and those are exactly the things that mitigate against having reliable ways to assess that work.
Lee: And then that complication is exacerbated by the fact that, I think, when you assign digital projects, you get a real diversity in terms of what ends up getting turned in.
Hodgson: I think it's highly problematic to design a really good assignment that has one set of assessment criteria... If you tell students exactly what to do in an assignment and they do it exactly the way you want, the assessment is already built in. I mean, you've done it or you haven't done it, right?
Lee: So how is a podcast comparable to a video, which is comparable to a Prezi, which is comparable to a blog post? How can you actually assess these things in ways that are commensurate?
McElroy: So when I assess digital rhetoric I do what I think many people do, which is have the students create the product, go through the process to make this text, or to engage in digital rhetoric in some manner, and then also have the reflective piece.
Beck: Asking them to reflect upon that kind of—the work that they did.
Holmes: Why did you try to do what you did? What were your working assumptions about the audience? Just kinda talking through those issues a lot.
Beck: And then really just, you know, say you got an A on it, because it's not really so much about the grade as what they've learned through that process.
Holmes: For me, I'm much more interested in students being able to talk about the process and develop some self-consciousness about the choices that they made than I am in evaluating a finished form per se.
VanKooten: So it's not that their product doesn't matter... but often the product isn't where the learning is evident; it's not where it resides.
Holmes: But mostly kind of looking at ability to narrate and think critically about the process of invention, self-consciousness about the medium in relationship to the audience, and maybe why they made those choices in relationship to an audience.
McElroy: So that what you're assessing is not just the product, and I've heard of some folks not even assessing the product, but assessing the student's understanding of the process of creating that and the fittingness of the product they've created.
Warfel: You don't actually grade the digital project itself; you grade the reflection. That's useful, especially if it's an early assignment in the semester and you're not teaching the skills necessary to do the digital project, you just want students to take a risk and try. I think that's one way of doing assessment, is to grade the reflection.
Rivers: So the way that I've resolved the issue of how do you grade digital products is removing the grading from the product itself and essentially on to the work that they do. So they're basically... at the end, they essentially make a case for, here's the work that I did, and that's what gets graded... I'm not actually grading the product, I'm grading your weekly reflections, which is, I think, a really good way of handling digital rhetoric, which tends, I think, to be more diverse in terms of the range of projects that you get. And I think generally just good writing pedagogy where you separate grading from feedback on writing... you build in a kind of safety net by grading the process as opposed to the product.
VanKooten: Sometimes reflections can be not the most authentic genre, but I feel like doing lots of different kinds of reflections—if they're talking to a classmate in class, or if they're writing, and then they're writing a more formal reflection essay along with the digital product, I can sort of look at all of those things and triangulate my assessment approach.
Arola: I love the justification statement. I love having students explain why they did what they did. It doesn't need to be super formal; sometimes it takes the shape of a presentation—you know, you're going to present your informational campaign—and then part of that presentation, you need to tell me why did you make that logo, why did you choose those colors, why in that video did you choose that sound clip, what were your choices and why did you make them? And their ability to articulate that for me really indicates a discursive consciousness, which indicates learning to me in my courses, and I make that clear to them.
Eyman: I think it's incumbent upon the student to be able to explain the success or the non-success, whatever the issues were, with the production, and so I think the assessment lies primarily in the rhetorician, the rhetor, the maker of the rhetorical objects... I want them to tell me how I should assess this because it means they have an understanding of what they've done in a way that is really more kind of critically engaged than if I gave them a rubric or if I asked them, you know, to just write a reflection.
Wargo: I'm not a rubric-er in terms of thinking about that, but when a rubric is used as a pedagogical tool to sort of think about either mode or resource, I think that that's important.
Rickert: You can start developing rubrics with the class about why they made the choices that they did, why they're rhetorical, how the technology enables in various ways, and if it enables, what's the best way to pursue it through that technological platform or medium.
Losh: When it comes to things, like practical things like making a rubric, I have a rubric that has four general areas, so it's conceptual, rhetorical, stylistic, and technical. And I tell students, though, that I'm going to evaluate them holistically, so in that case, a student who has a really interesting concept but is only able to execute, say, a paper prototype, that student can still get a good grade, because I think it's really important not to be too enamored with a particular kind of polished artifact.
Aguayo: I think because I come from a media department, I am constantly evaluating creative work all the time, and so there's obviously the conceptual and technical criteria. You know, technical—did you accomplish whatever technical goals I set out for you, and conceptual, like, are you getting the assignment, are you understanding it, are you articulating it, are you—you know. But I also think storytelling is probably where you have to sit, I think, with the criteria of—did they accomplish not just what the assignment was, but were they able to articulate a story, you know? And there's lots of criteria matrices you can use for that, but so for me, it's less sticky.
Lee: So, what I try to do is, as a class, we come together and we say, here's the project prompt, and admittedly, it's a very loose and vague prompt; it kind of gives them these wide parameters and contours within which to work. And so, given this, and given the kind of content we've been discussing in class and you've been reading outside of class, what seem like appropriate criteria that are also loose enough, that are capacious enough, that they will be applicable to the range and breadth of texts that they're going to produce?
Brown: I go about assessing digital rhetoric essentially the same way I go about assessing any other work.
Eyman: If the goal is to make stuff, right, so if you're making a digital text or digital practice or digital performance, I think the assessment has to come through the kind of rhetorical analysis of that particular thing, object, performance, whatever that has been created. So you go back to, like, the mechanisms that we would use to analyze, flip back on to what we've used to produce, and then you have to see where those map out.
Hodgson: A lot of assignments I typically give are fairly open-ended... and then in terms of assessment, what I do is I have students spend the time on their own to identify, well, what makes a good kind of this thing that I want to make. So they do the research to identify the various genres they're participating in, the communities they're looking at, and then thinking about what qualifies as a good work.
Arroyo: So that's what I do. I bring up skills, bring up rhetorical strategies, go over them and analyze, you know, and then I hold them accountable for those things, so it's not like all of a sudden you have to be a filmmaker, you know? But I want them to be able to pick apart a video, too, and like, how was that produced, you know? Like even the simple placement of text, how long does it stay up on the screen, was that effective, you know?
Lee: It's often very rhetorically conscious. So, what's your rhetorical situation, how do you attend to audience, so what's your audience awareness, what sort of style do you use, the arrangement, of course you can think about style in multiple ways, arrangement in multiple ways, so in some ways it's kind of rooted in the canons, but thinking of those canons not in the sort of linear, prescriptive, classical sense, but in kind of maybe more of how Collin Brooke thinks about it or how Yancey asks us to think in her chair's address, so thinking about them in dialogue, or what you get when you pair them, rather than thinking about them in isolation.
Yancey: My own sense is that digital rhetoric is sufficiently new that it behooves us all to use every single one of these opportunities as an opportunity to learn.
Brooke: First of all, I invite them to kind of self-assess, and I ask each of them to sort of take stock of where they're at, then take stock again at the end, and demonstrate to me that they've pushed their abilities... That's hard to assess by myself. And so assessment becomes less me and more we.
Yancey: So what did you learn? What did we learn? What have we all learned that can contribute to this larger enterprise? That's a through-line crossing all of them. I expect all of us to learn and all of us to contribute.

Question Four:
How do you assess digital rhetoric in the classroom?

Assessing digital rhetoric is a complex pedagogical practice, one interviewees describe as “sticky” and “problematic” (Hodgson), “perilous” (Losh), and even “a beast” (Lee). One of the reasons assessing digital rhetoric is so challenging, as Collin Brooke indicates, is that these texts are often marked by their “novelty,” which “mitigate[s] against having reliable ways to assess that work.” Moreover, as Rory Lee notes, the concern regarding novelty is further complicated by the breadth and diversity of digital texts students tend to create and submit. “How is a podcast comparable to a video,” for instance, or how is a Prezi “comparable to a blog post?” In other words, as Lee puts it, “How can [teachers] actually assess these [projects] in ways that are commensurate?”

The challenge of assessing digital rhetoric presented here, then, seems to be one of rupture: these texts are not like print texts, and thus assessment of them should be different as well. However, the general approaches that emerge below take up this challenge by orienting themselves to continuation—that is, by drawing on best practices in assessment from the fields of rhetoric and writing studies: scaffolding and sequencing, composing process, clear evaluative criteria, and evidence of learning.1

The Para-Texts of Assessment: Pointing to Learning

Despite the difficulty of evaluating digital rhetoric, the interviewees did identify and share specific assessment practices. One common approach was to have students create reflections on and about their digital texts. Here, students work to answer, per Steve Holmes, questions such as “why did you try to do what you did?” In taking this route, teachers turn the focus toward process rather than product; as Holmes says, “I’m much more interested in students being able to talk about the process and develop some self-consciousness about the choices that they made than I am in evaluating a finished form per se.” In having students reflect on their process, teachers are asking them to identify indicators of learning by clarifying what they did, explaining how it was rhetorical and intentional, and describing what they learned: “It’s not really so much about the grade as what they’ve learned through that process,” says Estee Beck.

As if to counter objections to grading students’ composing processes, Crystal VanKooten states, “It’s not that their product doesn’t matter”; rather, the product “is not where [learning] resides.” Stephen McElroy reiterates this point: “What you’re assessing is not just the product […] but assessing the student’s understanding of the process of creating that [product] and the fittingness of the product that they created.” While this set of responses still values the digital texts students create, other responses, as McElroy foreshadows, showed minimal evaluative concern for the product and focused almost entirely on students’ ability to reflect on the process and demonstrate what they learned as a result. As Jennifer Warfel Juszkiewicz states, “You don’t actually grade the digital project itself; you grade the reflection.” This can be an effective means of evaluation, especially toward the beginning of the semester, because it obviates the immediate need for students to have the technical skills necessary to create digital projects and it helps “students to take a risk and try.” Nathaniel Rivers offers a similar approach:

The way that I’ve resolved the issue of how do you grade digital projects is removing the grading from the product itself and essentially on to the work that they do. So they’re basically, at the end, making a case for the work that they did. And that’s what gets graded.

While assessing reflections seems to level the technological playing field and provide a way out of the “sticky” situation of grading novel projects that are diverse and use different modes and media, it also signals a central irony in attempts to assess digital rhetorical work: that is, teachers fall back on relying on words. Devoting all of one’s evaluative attention to the process vis-à-vis reflection raises other concerns, too. As VanKooten admits, “Sometimes reflections can be not the most authentic genre.” To address this, VanKooten suggests having students do “lots of different kinds of reflections.” For example, they can talk to classmates, they can write in class, and they can compose “a more formal reflection essay.” Another way in which students can participate in reflection, as Kristin Arola says, is in “the shape of a presentation.” In having students reflect in multiple ways, we can, as Arola continues, discern “a discursive consciousness, which indicates learning.” Moreover, if students are aware from the outset that they need to practice reflection in one or more forms in order to explain both what they did (process) and what they gleaned from doing so (indicators of learning), they are more likely to think consciously and critically about their text throughout the process of production. Put otherwise, this particular form of assessment also attempts to instill and encourage a rhetorical mindset in the creation of digital rhetoric. A different form of reflection, one Doug Eyman offers, is to ask students to determine how they should be assessed:

I think it’s incumbent upon the student to be able to explain the success or non-success, where the issues were with the production. So I think the assessment lies primarily in the rhetorician, in the rhetor, in the maker of the rhetorical object. I want them to tell me how I should assess this because it means they have an understanding of what they’ve done in a way that’s really more critically engaged than if I give them a rubric.

Although Eyman considers the rubric to be less appropriate for evaluating digital rhetoric, others argued for the effectiveness of rubrics. For instance, Jon Wargo finds value in using rubrics “as a pedagogical tool to think about either mode or resource.” In this model of assessment, students can benefit from thinking of a project in terms of its component parts, which rubrics, in isolating important characteristics in a given text and rhetorical task, ask students to do. In discussing rubrics, interviewees provided not only potential criteria but also strategies for developing criteria. Liz Losh shares the four criteria she tends to use when assessing digital rhetoric: “conceptual, rhetorical, stylistic, and technical.” Losh clarifies, however, that she nonetheless evaluates students holistically, so that a student who conceptualizes well but lacks the ability to execute technically can still succeed. The logic here, as Losh explains, is that it is important “not to be too enamored with a particular kind of polished artifact.” As such, and similar to those who rely on reflections for assessment, those who use rubrics might consider more than just the finished product in evaluating digital rhetoric.

Angela Aguayo’s set of criteria overlaps with Losh’s in that she, too, uses “conceptual and technical” as two criteria. However, Aguayo also includes a third criterion: storytelling. In discussing this criterion, Aguayo frames it through a question: “Did they accomplish not just what the assignment was but articulate a story?” The difference between these two rubrics reflects a difference in the types of digital rhetoric each teacher asks students to create. Losh’s assignments ask students to create digital arguments, hence the attention allotted to rhetorical and stylistic dimensions; Aguayo’s assignments ask students to create digital stories through video.

Another way to develop assessment criteria is to do so collaboratively with students. As Lee says:

As a class, we come together, and we say, “here’s the project prompt.” […] So given this, and given the kind of content we’ve been discussing in class and you’ve been reading outside of class, what seem like appropriate criteria that are also loose enough, that are capacious enough, that they will be applicable to the range and breadth of texts that they are going to produce?

This process, similar to Eyman’s proposed method of assessment, makes students responsible for their own learning by asking them to think about appropriate assessment criteria in the context of a given project. Such a process also works to make transparent, and therein help demystify, the assessment process. Moreover, this approach is inclusive of many voices, and it requires that students work collaboratively and dialectically to determine a set of criteria suitable for all participating rhetors. That said, getting students to compromise and arrive at a consensus can be difficult and time-consuming, and in guiding this collaborative process, teachers need to be cognizant of how “louder” voices can dominate the conversation and exert control over which criteria are included and how they’re understood. In addition, some students don’t feel adequately prepared and qualified to make decisions about assessment, preferring instead that the expert—the teacher—make such decisions for them.

A third form of assessment that emerged from interviewees’ responses was to evaluate digital rhetoric by conducting a rhetorical analysis. For Eyman, “the assessment has to come through the rhetorical analysis of that particular thing, object, performance, whatever that has been created.” James Brown agrees, saying, “I go about assessing digital rhetoric essentially the same way I go about assessing any other work,” that is, rhetorically, which is implicitly an argument for the continuation of rhetorical theory in digital rhetoric pedagogy. In taking this approach, interviewees also wanted to ensure that students understand what it means to rhetorically analyze digital rhetoric: such an understanding makes them critically aware of how they’re being assessed, and it helps them think rhetorically, which they can leverage not only in the analysis of digital texts but also, and more importantly in this context, in the creation of their own.

To help students foster an operable understanding of rhetorical analysis, interviewees said they devoted in-class time to modeling rhetorical analyses for their students, who then practice such analyses on their own. Sarah Arroyo outlines the process this way: “I bring up rhetorical strategies, go over them, and analyze, and then I hold them accountable for those things.” Justin Hodgson similarly speaks to how students practice rhetorical analyses in ways that inform the production of their own digital texts and the subsequent assessment of them:

Students spend the time on their own to identify, “well, what makes a good kind of this thing I want to make?” So they do the research to identify the various genres they’re participating in, the communities they’re looking at, and then thinking about what qualifies as a good work.

Orienting Ourselves to Learning

As evidenced in the above discussion of the genres of digital rhetoric assessment, a particular method of assessment—reflection, rubrics, rhetorical analysis—results in a particular pedagogical sequence: work with students in class toward creation of the genre; have students practice the genre, ideally with texts similar to the ones they will create; and then task students with producing the genre that will be assessed. This scaffolded approach not only makes students privy to the assessment process but also asks them to employ the assessment themselves, a move rooted in the longstanding commonplace that practice with and in a meta-genre begets effective composing.

Regardless of how we assess digital rhetoric, we should attempt as much as possible to learn from our evaluative practices. In addition, we need to foreground assessment in ways that assist student learning and that foster an awareness of that learning. As Kathleen Yancey says, “digital rhetoric is sufficiently new that it behooves us all to use every single one of these opportunities as an opportunity to learn.” Yancey offers these questions, for both teachers and students: “What did you learn, what did we learn, what have we all learned that can contribute to this larger enterprise?” In using assessment as a gateway to learning, teachers should also remember that the forms of assessment detailed above do not need to be practiced in isolation. For instance, teachers can use rubrics and have students create reflections. Furthermore, within their reflections, students can rhetorically analyze their own digital texts. Rubrics can also contain criteria that teachers can use to rhetorically analyze student work, and students can use those rubrics in thinking about and evaluating their own work in their reflections. Thus, in looking across the responses on assessment, we see an attempt to make the evaluative process transparent and reflective, to involve students in the process, and to make the process a rhetorical exercise, one explicitly keyed to learning.


1 In this way, the approaches here also embody the points of consensus and dissensus about assessing writing more broadly, though those particulars are outside the scope of this piece.