We’ve just passed the mid-term point, and the rush of in-class IL sessions has subsided. This term I experimented with a concept I read about in a 2012 College & Research Libraries News article, “Google Spreadsheets and real-time assessment: Instant feedback for library instruction,” in which the author, Shannon R. Simpson, reported on her technique of using Google Forms to shape her classes while they were actually taking place. I liked the idea of making sessions more relevant to students, rather than trying to guess what they would need. So, in a couple of my sessions this term, I implemented a simplified version of this technique, with mixed results.
For one 200-level writing class, the instructor told me ahead of time that students were having difficulty finding peer-reviewed resources, even though she knew they’d all had to do this in their 100-level writing courses. For that session, I created a quick survey (using Google Forms, though other tools would probably work as well) that asked students what their topic was, plus an open-ended question about what problems they had encountered in finding the peer-reviewed resources they needed. I placed a link to the survey in the online research guide for the class and asked students to fill it out in the first four minutes or so of class. I then opened the Google spreadsheet that collected the results, showed it to the whole class, and ran the IL session largely based on the problems they reported. In this scenario, the technique worked great. I was able to address conceptual problems they identified in the survey during the lecture and demonstration part of the session, as well as identify individual students to help with more specific problems during the practice portion. The session felt more relevant and engaging to the students, and the instructor sent me a couple of follow-up comments indicating that the students found it useful as well.
The second time I used it was in a 100-level writing class, where the IL session focused on evaluating web sources for an evaluative essay (lots of evaluation going on!). Again, I asked students for their topics (I always like to use their topics as examples in my lecture/demonstration), and I asked a close-ended question about the criteria they were developing for their evaluation. In retrospect (though I should have realized it beforehand), the close-ended question was a bad idea: it didn’t relate closely to the information literacy outcomes of the session and therefore did a poor job of directing my instruction. So I pretty much ended up doing my normal web resource evaluation lecture, with practice time in the second half. I may have fallen prey to the temptation of incorporating a new (to me) technological technique just because I could, not because it was really needed.
I will definitely try this technique again in the future, though, with these lessons learned from my very brief experience:
- Think carefully about the questions to ask, and make sure they align with the session’s topic and outcomes
- Avoid close-ended questions, as these don’t provide enough direction on where to take the session
- Keep it short – 2-3 questions
- Have a backup plan for if/when the technology fails