Heidi A. McKee and Dànielle Nicole DeVoss handled this expansive topic by dividing the fourteen selected essays into four larger, overarching categories: Equity and Assessment, Classroom Evaluation and Assessment, Multimodal Evaluation and Assessment, and Program Revisioning and Program Assessment. They stated, "first, we emphasize the assessment and evaluation of digital writing—the pedagogical, methodological, technological, and ethical approaches for and issues involved with assessing and evaluating multimodal, networked texts and the student learning they represent" (McKee & DeVoss, 2013). After establishing the importance of digital assessment and the general considerations practitioners must weigh when enacting it, the subsequent chapters pertain to "the use of digital technologies to change how writing (both digital and traditional) and writing instruction in large-scale programs is delivered and assessed" (McKee & DeVoss, 2013). As previewed, this digital book offers a multitude of perspectives and a depth of knowledge on how to use assessment and evaluation in a digital context, making it an incredibly worthwhile and valuable resource.
Mya Poe wrote "Making Digital Writing Assessment Fair for Diverse Writers." She asked readers to recognize that because this assessment deals with digital work, such as online texts or the creation of a digital ePortfolio, one ought to use assessment practices that echo students' technology practices. Poe also noted that multiple types of assessment are needed for the plurality of student identities: students come from different backgrounds, both demographically and in their level of access to technology. She wrote, "in the end, research tells us that student relationships to technology are complex, thus we are likely in need of multiple measures to understand student digital identities and how those identities inform their interactions with digital writing assessment" (Poe, 2013). The importance of understanding these multitudes in students is amplified by Angela Crow in "Managing Datacloud Decisions and 'Big Data': Understanding Privacy Choices in Terms of Surveillant Assemblages." Crow echoed Poe's point while pausing over the reality, weight, and risk of assessing digitally. For example, Google's available tools for teachers are helpful for assessing work, but they also pull student data beyond what those tools require, thus posing a potential security concern. She discussed this point within the framework of her question, "do we trade data for security?" (Crow, 2013). By setting the guidelines for how assessment should function, both philosophically and ethically, this opening section provides a complete lens through which to begin the actual, tactical work of evaluation.
This next section dives into the question of what constitutes a good project in the digital realm of work. In the past, evaluating student work for grammar, attention to the rhetorical situation, and other components was comparatively straightforward. With these new digital dimensions, there are many other factors to consider when assessing work in the traditional or digital classroom, such as the reasoning behind the mode chosen for a digital project. Charles Moran and Anne Herrington set up this question of good work in their essay "Seeking Guidance for Assessing Digital Compositions/Composing." They pulled assessment elements of digital work from the National Writing Project to establish a framework for evaluation, which included the following categories: context, artifact, substance, process management and technical skills, and habits of mind (Moran & Herrington, 2013). These five domains guide the new aspects of digital work that need to be evaluated through a nuanced lens—for example, habits of mind refers to one's risk taking and creativity in project creation. These criteria are similar to those Cheryl Ball (2012) noted in "Assessing Scholarly Multimedia: A Rhetorical Genre Studies Approach." When evaluating webtexts, Ball listed the following as evaluative categories: "creativity, conceptual core, research/credibility, form/content, audience, timeliness" (pp. 67–68). Ball's categories overlap with the guidelines Moran and Herrington suggested.
To further this conversation, Colleen A. Reilly and Anthony T. Atkins provided "Rewarding Risk: Designing Aspirational Assessment Processes for Digital Writing Projects." As the title suggests, Reilly and Atkins discussed the discomfort students may experience in multimedia work, such as using voice recordings and creating videos, and the value in students pushing these boundaries for the sake of learning through discomfort. One key concept they identified is the following: "deliberate practice—one manner in which the acquisition of expertise has been theorized—overtly requires a process that includes trial and error, the experience of which leads to expanding proficiencies and developing expertise" (Reilly & Atkins, 2013). The reasoning behind the concept is that a student of digital rhetoric will develop their multimodal skills only if they push themselves and work in unfamiliar constructs. One such example appears in "Stirred, Not Shaken: An Assessment Remixology" by Susan H. Delagrange, Ben McCorkle, and Catherine C. Braun. In a class taught by McCorkle, students must remix genres and YouTube videos to learn about fair use and the importance of remediation. This classroom exercise likely pushes them beyond the skills they bring to the class and forces them to try new-to-them genres, like blending horror music with Disney films and learning to use video editing software (Delagrange et al., 2013).
Risk and discomfort are key components of working with multimodal projects and aid in student learning. To illustrate, this review offered me the opportunity to enact the practice these authors describe, as this webtext is my first project in coding. By writing this review, I was able to follow and apply the philosophy of this text by working in a new mode of technology and pushing my personal boundaries.
The writers in the category "Multimodal Evaluation and Assessment" suggested a variety of ways to engage with digital projects, both for teaching and for sustained use. One of the recurring suggestions is the use of an ePortfolio. In "Composing, Networks, and Electronic Portfolios: Notes toward a Theory of Assessing ePortfolios," Kathleen Blake Yancey, Stephen J. McElroy, and Elizabeth Powers wrote, "today we are migrating to electronic portfolios, in part to teach; in part to assess; in part, definitely, to evoke and then represent a new set of relationships" (Yancey et al., 2013). These relationships refer to the connections students can make throughout their work when creating a portfolio. In "ePortfolios, Finally!" from Campus Technology, Trent Batson (2010) wrote, "The realization is dawning across academia that portfolio practices, as an educational process, is rewarding and engaging and fits the times—student owned, stays with student over time, produces additional metrics by which to assess and evaluate students, supports high-impact learning experiences outside of the classroom, helps create a strong resume, develops reflective and integrative thinking, supports life-long learning, and so on." Due to the online nature of the work, a semester's worth of work may be housed in one place—and the connections between the projects and knowledge become more apparent. One way to assess the work—to see connections in learning, links to multimodal projects, and a breadth of work over the course of a semester—is to invoke Crystal VanKooten's suggestion outlined in "Toward a Rhetorically Sensitive Assessment Model for New Media Composition." VanKooten created her own assessment model, which includes the following processes in composition: "participate, produce, revise, publish, reflect" (VanKooten, 2013). These five steps encourage students to push their limits, engage with the audience, and practice their own self-assessment as well.
Although a full-circle model of learning—from interaction to reflection—is not novel on its own, the use of the model in digital work proves especially helpful because the evidence of the work is readily at hand: it lives online, making it easy to edit and to revisit for reflection.
The last category from McKee and DeVoss offered programmatic suggestions for assessment. Hearing from a variety of experts, one learns of many suggestions for programs, such as the "Writers' Studio" used at Arizona State University or the program MyReviewers at the University of South Florida (McKee & DeVoss, 2013). Another way to bring digital assessment to the programmatic level is through the aforementioned use of ePortfolios. In Anne Zanzucchi and Michael Truong's "How Electronic Portfolio Assessment Shapes Faculty Development Practices," the two argued: "Program-based reviews of ePortfolios offer important faculty development opportunities, namely in helping instructors enhance their skills in evaluating student learning and aligning curriculum. Evaluating ePortfolios across the program is an important step toward understanding student learning in stages and sequence, which strengthens course planning as well" (Zanzucchi & Truong, 2013). Thus, ePortfolios help students engage in multimodal work, offer an easy digital avenue for evaluation, and provide programs writ large an opportunity to assess and improve at the university level.