Yesterday, Gardner Campbell, Tom Woodward and I had a tremendous opportunity to present our vision for online learning at VCU to the VCU Board of Visitors.  We had 1/2 hour (during their lunch!) to deliver a playful and informative learning experience for a Board that was apparently eager to hear where we are headed. We said some things, gave away some t-shirts, and, ultimately, I believe, did what was asked of us reasonably well.

We left the last 10-15 minutes for Q&A, and a few of the questions were of the sort I fully expected. One question, though, has me ruminating a bit. I’m not too distressed about how I answered the question, but it was a tough question to answer parsimoniously. The question was something to the effect of, “How do you know that online students are actually the ones taking the tests?” It was the cheating question.

I started the response by telling the Board about three of the most common approaches to “the cheating problem,” specifically with respect to exams: the remote live or web proctoring route (via technology such as SoftwareSecure), the employment of remote testing centers, and the use of live proctors (obtained by the student and vetted by the program/department). I went on to say that one of our goals was to help faculty members think differently about assessment such that cheating and/or plagiarism were effectively eliminated as concerns. The Board member who asked the question followed up, inquisitively, with, “Like what?” I mumbled through a response that used words like “authentic” and “groups” and “projects” and “creating” and, and, and… It wasn’t my best moment, but I think I got the point across.

In hindsight, and if I had more time, I would have liked to reference a blog post that Stephen Downes wrote in 2012 about New Forms of Assessment.

In the schools… there is no reward for helping others (indeed, it is heavily penalized). Suppose educational achievement was measured at least partially according to how much (and how well) you helped others. The value of the achievement would increase if the person is a stranger (and conversely, decrease to zero if it’s just a small clique helping each other) and would be in proportion to the timeliness and utility of the assistance (both of which can be measured)… Suppose… students were rewarded for cooperation. Not collaboration; this is just the school-level emulation of the creation of cliques and corporations. Cooperation, which is a common and ad hoc creation of interactions and exchanges for mutual value.  
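Downes suggests that timeliness, utility, and the helper–recipient relationship can all be measured. As a toy illustration of that weighting (the function, scales, and numbers below are my own invention, not anything Downes specifies), one could sketch a per-act cooperation score like this:

```python
def cooperation_score(social_distance, timeliness, utility):
    """Score a single act of help, per Downes' sketch.

    social_distance: hops between helper and recipient in the class
        network (1 = clique-mate, larger = more of a stranger).
    timeliness, utility: each measured on a 0-1 scale
        (hypothetical units for illustration).
    """
    # Helping inside a tight clique (distance 1) earns nothing;
    # the value of the achievement grows as the recipient is
    # more of a stranger to the helper.
    stranger_bonus = max(0.0, social_distance - 1)
    return stranger_bonus * (timeliness + utility)

# A timely, useful answer to a clique-mate scores zero;
# the same answer to a stranger four hops away scores highest.
clique_help = cooperation_score(1, 0.9, 0.9)
stranger_help = cooperation_score(4, 0.9, 0.9)
```

The specific shape of the function matters less than the incentive it encodes: helping beyond your clique, promptly and usefully, is what counts.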

How might we operationalize this notion of cooperation within a learning community?

In a recent conversation, colleagues and I discussed how social network analysis (SNA) might be applied to document these sorts of things. SNA is an approach to mapping and measuring relationships within a group of related entities. The entities could be people, organizations, ideas, etc. (For a simple explanation of the basics of SNA, I highly recommend the introduction on orgnet.com.) If we think of students as nodes on a network, we can create visual and mathematical representations of their contributions to a class, program, learning community, etc. So, for example, where Twitter is integrated into a course or program, Martin Hawksey’s TAGSExplorer can be used to visualize the network of students on Twitter.

[Screenshot: TAGSExplorer interactive archive of Twitter conversations from a Google Spreadsheet]

The archive of tweets from TAGSExplorer could then be imported into NodeXL or any other SNA program to generate key SNA metrics such as betweenness centrality, a measure of how often a node sits on the shortest paths between other nodes in the network. In assessment terms, betweenness centrality would capture the degree to which a student served as a connector within the Twitter network: students who link other students to each other, and who point out connections between others on Twitter, would have higher betweenness centrality scores.
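The metric itself is easy to compute once the network is exported. As a minimal sketch (the five-student @mention graph below is invented for illustration; in practice NodeXL or a library would do this for you), here is Brandes’ algorithm for unweighted betweenness centrality in plain Python:

```python
from collections import deque

def betweenness_centrality(graph):
    """Brandes' algorithm for unweighted betweenness centrality.

    graph: dict mapping each node to an iterable of its neighbours.
    Returns raw (unnormalized) scores, counting each ordered pair.
    """
    bc = dict.fromkeys(graph, 0.0)
    for s in graph:
        # BFS from s, recording shortest-path counts and predecessors.
        stack = []
        pred = {v: [] for v in graph}
        sigma = dict.fromkeys(graph, 0)   # number of shortest paths
        dist = dict.fromkeys(graph, -1)
        sigma[s] = 1
        dist[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # Accumulate dependencies in reverse BFS order.
        delta = dict.fromkeys(graph, 0.0)
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Hypothetical @mention network: jo bridges two otherwise
# separate pairs of students.
graph = {
    "amy": {"ben", "jo"},
    "ben": {"amy", "jo"},
    "jo":  {"amy", "ben", "kim", "lee"},
    "kim": {"jo", "lee"},
    "lee": {"jo", "kim"},
}
bc = betweenness_centrality(graph)
# Undirected graph: each pair was counted in both directions, so halve.
bc = {v: score / 2 for v, score in bc.items()}
```

In this toy network, `jo`, who bridges the two pairs, ends up with the only non-zero score: exactly the “connector” role described above.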

There are other, simpler ways to think about how to “measure” cooperation or contributions to a learning community. For example, where students are asked to share web-based resources (articles, blog posts, etc.) via a social bookmarking group, we can count how often each student does so. Diigo provides those simple analytics for groups. The screenshot below is from a Diigo group that some colleagues and I use fairly casually. You will note that Diigo tracks the number of bookmarks a group member adds as well as the number of different topics that member creates.
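Even without a platform’s built-in analytics, these counts are trivial to reproduce from a raw list of bookmark records. A quick sketch (the members and topics below are made up, not actual Diigo data):

```python
from collections import Counter, defaultdict

# Hypothetical bookmark log: one (member, topic) pair per shared link.
bookmarks = [
    ("jess", "open-education"), ("jess", "assessment"),
    ("tom", "assessment"), ("jess", "networks"),
    ("gardner", "open-education"), ("tom", "assessment"),
]

# How many bookmarks has each member contributed?
bookmarks_per_member = Counter(member for member, _ in bookmarks)

# How many distinct topics has each member touched?
topics_per_member = defaultdict(set)
for member, topic in bookmarks:
    topics_per_member[member].add(topic)
```

Simple counts like these say nothing about quality, of course, but they make the baseline question of who is contributing at all immediately visible.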

[Screenshot: Diigo group settings for the VCU_OOE group, showing each member’s bookmark and topic counts]

In a math course, instead of simply having students provide responses to problem sets, what if students were rewarded for the degree to which they help other students solve problems? A platform like Piazza is perfectly suited for this sort of approach. In the screenshot below, you see a question, a student’s response and the instructor’s response. This could easily be a situation where students help each other work through a problem with minimal intervention by the instructor.

[Screenshot: a Piazza question-and-answer thread from a MATH 1B course]

These are just some of the ways we might begin to operationalize Downes’ notion of rewarding cooperation; surely there are others. For me, though, it’s important that faculty members reflect on the idea itself. That is, what does the learning experience look like when students know explicitly, up front, that they will be assessed based on the degree to which they cooperate with and help the other members of their learning community? That’s not a form of assessment that can be easily gamed or that lends itself to cheating.