ResLife Student Staff Training: Behind Closed Doors, Is It Working? (Part 2)

Across the country, a week or two before all students return to campus, housing student leaders come back early to prepare and train for the upcoming year. This often involves team building, crisis management, facetime with campus resource leaders, and hopefully some time carved out for hall preparations. After the whirlwind of training season and opening is complete and we have caught our breath, a question lingers over housing departments: did the training work?

Posts in this series:
Part 1 | Part 2 – Behind Closed Doors | Part 3


In the first part of this series, we discussed Kirkpatrick’s Four Levels of Training Evaluation (1994) and how to assess if learners are enjoying training and if learning transfer occurred. As a reminder, here is Kirkpatrick’s model:

A visualization of Kirkpatrick’s Four Levels of Training Evaluation, a four-level pyramid. Level 1 is Reaction (“Did the learners enjoy the training?”), level 2 is Learning (“Did learning transfer occur?”), level 3 is Impact (“Did the training change behavior?”), and level 4 is Results (“Did the training influence performance?”).

Source: KudoSurvey

In this blog, we are going to dig back into level 2, specifically looking at observation to see if learning happened. This is where Behind Closed Doors style activities can serve as a form of assessment. For those who may not have heard of Behind Closed Doors before, it is a hands-on activity in which a situation, such as a roommate conflict, alcohol violation, or traumatic disclosure, is acted out and resident advisors then practice their responses. Here is a quick video from Acadia University that explains what Behind Closed Doors often looks like.

Should We Even Be Doing Behind Closed Doors?

Behind Closed Doors is a very divisive topic, and in my opinion rightfully so. If approached incorrectly, these activities can function more like hazing than a continuation of learning and training, actively causing harm to students and staff. I believe that, with the right format and approach, Behind Closed Doors can be a beneficial experience, ultimately allowing a department to know whether its resident advisors can apply their learning to a crisis situation. I will share more about this in a future blog post, but here I am going to assume you have decided to do some kind of Behind Closed Doors activity, and we can explore together how to use that activity to assess learning application.

Level 2 – Learning

As a reminder, measuring learning is not the same as measuring confidence in learning. There is value in understanding confidence in learning for things like deciding on a continuing development schedule; however, I will not be focusing on that approach here.

For level 2, I would recommend building both a test of learning and some kind of practical observation, like a Behind Closed Doors activity. We explored how to build a test of learning in Part 1 of this series; here I am going to show how to assess a Behind Closed Doors experience with an observation protocol. Similar to testing, the expected outcomes for each of your Behind Closed Doors experiences will guide you in creating your observation protocol. In this case, an expected outcome is a statement of what participants should demonstrate during a situation. For example, if resident advisors are participating in a traditional Behind Closed Doors alcohol violation scene, then the expected outcomes for that scene might be:

  • Resident Advisors follow proper knocking procedure
  • Resident Advisors request music to be turned off
  • Resident Advisors do not close the door to the hallway
  • Resident Advisors request that all drawers are opened by the actors
  • Resident Advisors do not open any drawers themselves
  • Resident Advisors request that all bottles of alcohol are emptied down the sink
  • Resident Advisors request that all bottles of alcohol are thrown away
  • Resident Advisors document the names of all actors
  • Resident Advisors document the Student IDs of all actors
  • Resident Advisors share that the situation will be documented
  • Resident Advisors share that the actors should expect communication from the Resident Director regarding their conduct meeting
  • Resident Advisors do not make promises about the outcome of the conduct meetings

These will likely differ depending on your school and your policies for different duty situations. However, those policies should have already been covered during training so your resident advisors can apply that learning in this activity. These expected outcomes can now shift easily into a rubric that becomes your observation protocol. A rubric is a structured way to assess, and share feedback on, the specific aspects of a scenario you are looking for. I would recommend having your supervising housing staff complete a rubric during each observation.
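To make the shift from expected outcomes to a scored checklist concrete, here is a minimal sketch in Python, assuming you simply want to tally which outcomes were met during a single observation. The outcome wording, function name, and example data are all illustrative; in practice your form tool handles the actual data collection.

```python
# A minimal sketch of an observation rubric as a checklist.
# Outcome wording mirrors the alcohol violation example above;
# adapt it to your own policies and scenarios.

EXPECTED_OUTCOMES = [
    "Followed proper knocking procedure",
    "Requested music be turned off",
    "Did not close the door to the hallway",
    "Requested all drawers be opened by the actors",
    "Did not open any drawers themselves",
    "Requested alcohol be emptied down the sink",
    "Requested empty bottles be thrown away",
    "Documented the names of all actors",
    "Documented the Student IDs of all actors",
    "Shared that the situation will be documented",
    "Shared that actors should expect follow-up from the Resident Director",
    "Made no promises about conduct outcomes",
]

def score_observation(observed):
    """Return (number of outcomes met, list of outcomes missed)."""
    missed = [o for o in EXPECTED_OUTCOMES if not observed.get(o, False)]
    return len(EXPECTED_OUTCOMES) - len(missed), missed

# Example: one supervisor's checklist for one scene
observation = {o: True for o in EXPECTED_OUTCOMES}
observation["Documented the Student IDs of all actors"] = False

met, missed = score_observation(observation)
print(f"Met {met} of {len(EXPECTED_OUTCOMES)} expected outcomes")
print("Follow up on:", missed)
```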


Examples

If you use Roompact, you can create one form with all rubrics and edit a submission after each observation to keep all of the data in one form. Another way to accomplish this would be to create a repeating group of rubric questions within one form (but you’ll want to keep your rubric items general, since you’d be using them for all scenarios). An added benefit of Roompact is that you can also include a question in your form asking to tag the staff member being evaluated and select “sharing enabled” to have a copy of the rubric automatically sent to that staff member. This can act as a transparent feedback tool that can be reviewed later.

An example of a rubric including a "tag a staff member" question, a dropdown for the scenario, and a repeatable group of rubric questions.

If you use Qualtrics or Google Forms to administer your rubrics, you can use skip logic to route observers to the different rubrics for each scenario and have all your data collected in one place.

A three-question Qualtrics survey. The first question is a multiple choice question asking about staff affiliation, the second is a fill-in-the-blank asking for the Resident Advisor(s) participating, and the third is a multiple choice question asking to select the Behind Closed Doors scenario.
A one-question Qualtrics survey. It is a Likert scale question block that asks whether Resident Advisors did or did not complete each of the expected outcomes.

Having a rubric like this in front of your supervising housing staff also serves as a reminder of what to provide feedback on when moving through a scenario, which can be especially helpful for newer members of your professional team. From an assessment standpoint, you could also add a writing space below the rubric to capture the positive and constructive feedback shared during the session. This is a great way to see if there are patterns of behavior that are not aligned with the expected outcomes and still need to be addressed.
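If you later export the rubric responses as a spreadsheet (for example, a CSV download from Qualtrics or Google Forms), a short script can help surface those patterns. The sketch below rests on assumptions rather than any real export format: a hypothetical file named bcd_rubric_export.csv where each row is one observation, with a "scenario" column and one "Yes"/"No" column per expected outcome.

```python
import csv
from collections import Counter, defaultdict

# Assumed export layout: one row per observation, a "scenario" column,
# and one "Yes"/"No" column for each expected outcome. Adjust the
# filename and column handling to match your actual export.
EXPORT_PATH = "bcd_rubric_export.csv"

missed_by_scenario = defaultdict(Counter)

with open(EXPORT_PATH, newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        scenario = row.pop("scenario", "Unknown")
        for outcome, response in row.items():
            if response.strip().lower() == "no":
                missed_by_scenario[scenario][outcome] += 1

# The most frequently missed outcomes per scenario point to topics
# worth revisiting in continuing staff development.
for scenario, misses in missed_by_scenario.items():
    print(scenario)
    for outcome, count in misses.most_common(3):
        print(f"  missed {count} time(s): {outcome}")
```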

Conclusion

Look at you, you now have rubrics and observation protocols as tools in your toolkit! Remember, by practicing meaningful and strategic assessment, your department can more effectively tell its story and gather actionable data.

I will be back soon with Part 3, where we will explore whether training practices have actually impacted the behavior of resident advisors across the fall.
