Post-Pandemic Impact and Feedback Research Program
Building a UX measurement + mixed-methods research system when traditional outcome metrics were unavailable
What you’ll see
How I defined the decisions the research needed to support (renewals, product improvements, implementation guidance)
The research program design: annual survey (1,000+ teachers; 200+ leaders/year) + qualitative case studies
How I ran an RFP and managed external research partners while maintaining methodological quality
Key insights that informed decisions (e.g., sustained emphasis on mentoring based on evidence)
How I enabled internal teams to apply findings in real partner conversations
Client: Gradient Learning (supporting Summit Learning implementation)
Timeline: 2020–2021 (launched); annual survey continued in 2022 and 2023
My title: Project Director (advocated for and helped create a dedicated Research Lead function)
My role: Research program lead (strategy + operations + vendor management + enablement)
Users / audiences: Teachers and school leaders (participants); executive leadership; product team; implementation/support; communications; renewals/account teams
Methods: Annual survey (1,000+ teachers and 200+ school leaders/year); mixed-method case studies (surveys, interviews, focus groups); vendor-led fieldwork with internal oversight
Deliverables: Published annual teacher/leader survey report + published case studies + internal professional learning on how to apply findings
Outcomes (high level): Established a repeatable evidence system to guide renewals, product improvements, and implementation guidance; reinforced the continued emphasis on mentoring as a core feature, based on consistent findings.
The problem
After the pandemic, the usual indicators of impact (test scores and other stable measures like attendance or graduation trends) were unavailable or unreliable. Without a new approach, the organization would lack credible signals about whether the program was supporting teachers and students, and would lose evidence needed to guide product decisions, implementation guidance, and renewal conversations.
What we needed to learn
How were teachers and leaders experiencing implementation post-pandemic?
What parts of the program were most valuable to users (and why)?
What was driving success in high-performing contexts that could be replicated?
What evidence could credibly support decisions about renewals, product improvements, and implementation support in the absence of traditional metrics?
What I owned
As Project Director, I led the end-to-end creation and rollout of the research agenda. I owned:
Designing the research strategy and annual cadence (survey + case studies)
Advocating for the creation of a Research Lead function and aligning with executive leadership
Creating the RFP, selecting outside research partners, and managing the vendor process
Overseeing survey development and qualitative protocols
Coordinating with partner districts and schools to drive participation
Contributing to synthesis and guiding how findings were packaged for different audiences
Leading internal professional learning so account managers/coaches could apply results in implementation and renewal conversations
Research approach
To replace missing external outcome metrics with credible, decision-ready evidence, I designed a mixed-methods program with two complementary components:
Annual teacher + leader survey (scalable measurement)
A repeatable instrument to track satisfaction, perceived impact, implementation conditions, and outcomes that mattered in context (e.g., instructional differentiation, belonging/mentoring, and views of student achievement impact).
Case studies of high-performing schools (explanatory depth + narrative evidence)
Qualitative case studies to explain the “why” behind the survey signals and surface replicable conditions for success. We completed:
Two in-depth case studies of individual schools, each including all implementing teachers and leaders, and
One cross-school case study of 10 schools based on school leader interviews (one leader per school).
Participation was a key operational priority: across all three case studies, teacher response rates exceeded 90% and leader participation was 100%.
What we delivered
Published annual survey reports capturing teacher and leader perspectives at scale, with year-over-year continuity through 2022 and 2023.
Published case studies that documented implementation conditions and success patterns in high-performing contexts.
Internal enablement (professional learning + talk tracks) that helped frontline teams use findings responsibly in renewal and implementation planning, and helped communications teams leverage findings with appropriate context.
Impact
This initiative created a sustainable evidence system at a moment when traditional evaluation signals were limited. It strengthened the organization’s ability to:
Support renewal conversations with credible, research-based evidence,
Prioritize product improvements using user-reported value and outcomes, and
Refine implementation guidance based on what high-performing schools were doing differently.
One specific decision impact: findings reinforced the continued emphasis on mentoring, both as a core driver of user value (belonging/support) and as a priority to preserve when updating the platform.
What I learned
Building a research agenda that gets used requires more than good methods. It requires operational design and well-timed engagement: keeping the right stakeholders involved at the right moments so the work stays connected to real decisions. This project reinforced how important it is to pivot when the environment changes. When traditional outcome metrics became unavailable or unreliable, we shifted to a mixed-methods approach that produced credible proof points through surveys and case studies. I also learned that the final step matters as much as the research itself: translating findings into clear products, messaging, and internal enablement is what made the work repeatable and genuinely usable across product, implementation, communications, and renewal conversations.