I finally watched Moneyball last night. I read the Michael Lewis book pretty much the day it hit the stores, but hadn’t seen the movie until last night (#ParentingIsHard).
Watching the movie came at a particularly interesting moment for me. I’ve been closely following the developments around the research out of Purdue University on CourseSignals. In late September, Purdue issued a press release with the following headline: “Purdue software boosts graduation rate 21 percent.” (NOTE: Signals was developed at Purdue and launched full scale there in 2008. It has since been released as a commercial product by Ellucian called CourseSignals.) That research has been called into question by the great Mike Caulfield. Also, Alfred Essa, who, interestingly, works for McGraw-Hill, ran some simulations that effectively debunk the methods of the Purdue study. Caulfield recently wrote a nice summary of the issue and provided some good links for those who want to dig deeper.
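If you want a feel for the kind of argument Essa’s simulations make, here’s a rough sketch of my own (toy numbers, not his actual code or data): even if Signals had zero effect, students who persist take more courses, so they’re more likely to end up in two or more Signals-enabled courses, and the “2+ Signals courses” group graduates at a higher rate anyway.

```python
import random

# Toy simulation (my own sketch, not Essa's actual code): graduation is
# unaffected by Signals; the only thing at work is a selection effect.
random.seed(42)

students = []
for _ in range(100_000):
    graduated = random.random() < 0.70  # baseline graduation rate, Signals has no effect
    # Students who persist to graduation take more courses overall, so they
    # get more chances to land in a Signals-enabled course.
    courses_taken = random.randint(8, 40) if graduated else random.randint(1, 12)
    signals_courses = sum(1 for _ in range(courses_taken) if random.random() < 0.15)
    students.append((graduated, signals_courses))

def grad_rate(group):
    return sum(grad for grad, _ in group) / len(group)

two_plus = [s for s in students if s[1] >= 2]
fewer = [s for s in students if s[1] < 2]

print(f"Graduation rate with 2+ Signals courses: {grad_rate(two_plus):.1%}")
print(f"Graduation rate with fewer Signals courses: {grad_rate(fewer):.1%}")
# The first number comes out much higher even though Signals did nothing:
# taking more courses is itself a marker of persistence.
```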
I’m particularly interested in these developments because I attended an information session a few months ago where Ellucian gave what was, effectively, a marketing pitch for CourseSignals to VCU stakeholders. I don’t know where we are with respect to buying access to CourseSignals, but I have some real concerns. My trepidation pre-dates the Purdue research and the backlash; I raised my concerns at the information session. Specifically, I asked the Ellucian representative about the degree to which a faculty member has control over what triggers the changes from green to yellow to red lights, and what triggers a warning communication to a student. The response was something to the effect of, “Right now, those are triggered by algorithms determined by the system, but modifications to that are on the ‘product map’.” In other words, generally speaking, the CourseSignals algorithms were defining student success. This felt like academic advisement by algorithm, which felt… well… icky.
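To make concrete what “advisement by algorithm” means, here’s a purely hypothetical sketch; the weights and cut-offs are my own invention, not Ellucian’s actual algorithm. The point is that whoever hard-codes those numbers, not the faculty member, is deciding what counts as an at-risk student.

```python
# Purely hypothetical illustration of "advisement by algorithm" -- NOT the
# actual Course Signals logic, just the general shape of the idea: a weighted
# risk score gets bucketed into a traffic light, and the system, not the
# instructor, decides where the cut-offs sit.

def traffic_light(grade_pct: float, lms_logins_per_week: float, prior_gpa: float) -> str:
    # Weights and thresholds are baked into the product; in the version
    # pitched to us, faculty couldn't change them.
    risk = (
        0.5 * (1 - grade_pct / 100)
        + 0.3 * max(0, 1 - lms_logins_per_week / 5)
        + 0.2 * max(0, 1 - prior_gpa / 4.0)
    )
    if risk < 0.25:
        return "green"
    elif risk < 0.5:
        return "yellow"
    return "red"  # triggers an automated warning message to the student

print(traffic_light(grade_pct=62, lms_logins_per_week=1, prior_gpa=2.4))  # "red"
```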
Earlier this week, there was big news out of Carnegie Mellon, where the Simon Initiative was launched. From the announcement:
> Technological platforms are creating unprecedented global access to new educational opportunities. But we do not know the extent to which students using these systems are learning. As educators and researchers, we must partner to ensure that these platforms not only deliver information, but also include useful metrics, standards and methods that maximize learning outcomes.
Science. Technology. Metrics. Standards. Learning Outcomes. And, “deliver information?” Is that all these new technological platforms bring to the learning table? If you look at the list of individuals on the Global Learning Council (GLC) that is part of the Simon Initiative, you’ll see folks from edX, Coursera, Microsoft, Kaplan, the Gates Foundation, Google, etc. There’s not a single learning scientist on that list. Compare that list to the membership of the National Research Council’s Committee on Developments in the Science of Learning, which wrote How People Learn.
It just seems like there’s a critical mass of folks with lots of capital who continue to think that with the right amount of data (the bigger the data the better?) and the right amount of computing power, we can, at long last, figure out these wicked problems of learning and assessment.
I’m not at all opposed to the idea that we might be able to tap into the power of modern computing to help us better understand learning. In fact, some would say that I’m a recovering positivist. However, my views on learning have been influenced by the likes of David Perkins and Jerome Bruner. And my thinking around assessment is in perpetual beta, but I like Gary Stager’s idea that, for each student in a course, the goal should be to impress themselves:
> I have strong reservations about both grades and rubrics. I believe that both practices have a prophylactic effect on learning. Doing the best job you can do and sharing your knowledge with others are the paramount goals for this course. I expect excellence…. Therefore, I am trying a new experiment this term. You should evaluate each course artifact you create according to the following “rubric.” The progression denotes a range from the least personal growth to the most.
>
> - I did not participate
> - I phoned it in
> - I impressed my colleagues
> - I impressed my friends and neighbors
> - I impressed my children
> - I impressed Gary
> - I impressed myself
Really ponder that for a moment. What if we said to every student at every level that your whole goal for your formal learning experience is to impress yourself? Not to impress the teacher; impress yourself. I’ve tried it with some graduate courses I’ve taught, and I think it’s fairly empowering and transformative. I also regularly revisit Stephen Downes’ post on New Forms of Assessment:
> Suppose instead students were rewarded for cooperation. Not collaboration; this is just the school-level emulation of the creation of cliques and corporations. Cooperation, which is a common and ad hoc creation of interactions and exchanges for mutual value. Cooperative behaviours include exchanges of goods and services, agreement on open standards and protocols, sharing of resources in common (and open) pools, and similar behaviours.
>
> Imagine receiving academic credit for contributing well-received resources into open source repositories, whether as software, art, photography, or educational resources. Imagine receiving credit for long-lasting additions to Wikipedia or similar online resources (we would have to fix Wikipedia, as it is now run by a gang of thugs known as ‘Wikipedia editors’). We can have wide-ranging and nuanced evaluations of such contributions, not simple grades, but something based on how the content contributed is used and reused across the net (this would have the interesting result that your assessment could continue to go up over time).
Part of Downes’ vision has become reality at the UC San Francisco School of Medicine, where students are receiving academic credit for editing medical content on Wikipedia.
I don’t want to set up some false dichotomy here. This isn’t a quantitative vs. qualitative thing. I just think that we can use our advanced levels of computing power to collect LOTS of evidence of learning: data in all forms. As one example of what’s possible, consider social network analysis. From a community of learners, we can use social network analysis to MAP (visualize!) and MEASURE (metrics!) individual contributions in relation to others in a learning network. Here’s one of a growing number of examples of what that looks like.
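To make that concrete, here’s a minimal sketch using the networkx library; the “who replied to whom” data is made up, but the mapping and measuring are exactly the kind of thing these tools do.

```python
import networkx as nx

# Hypothetical "who replied to whom" interactions from a course discussion forum.
replies = [
    ("Alice", "Bob"), ("Alice", "Carol"), ("Bob", "Carol"),
    ("Dana", "Alice"), ("Dana", "Bob"), ("Evan", "Alice"),
    ("Carol", "Evan"), ("Bob", "Alice"),
]

G = nx.DiGraph()
G.add_edges_from(replies)

# MEASURE (metrics!): centrality scores quantify each learner's position in the network.
in_degree = nx.in_degree_centrality(G)      # how often others engage with you
betweenness = nx.betweenness_centrality(G)  # how often you bridge otherwise-separate peers

for student in G.nodes:
    print(f"{student:6s} in-degree={in_degree[student]:.2f} betweenness={betweenness[student]:.2f}")

# MAP (visualize!): a force-directed layout gives coordinates you can hand to
# matplotlib, Gephi, etc. to actually draw the network.
positions = nx.spring_layout(G, seed=1)
```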
Seven years ago, after reading Moneyball, I wrote about the book and its principles with respect to the so-called “data-driven decision making” “movement” in K-12 education. More recently, I wrote about “big data” and the merits of quantitative approaches to political analysis. In both of those posts, I suggest that our increasing capacity to capture and analyze quantitative data doesn’t have to come at the expense of predictions and decisions based on human observations of our world. It’s not a zero-sum game.