Pre-Application Webinar Video: Effectiveness and Services Trials
Transcript
PATRICIA AREAN: Okay everybody we're going to go ahead and get started. Welcome to our webinar on the updates to the clinical trials announcements. Today we'll be talking about pilot effectiveness research and research in mental health systems and services.
I'm Pat Arean, I am the director of the Division of Services and Intervention Research. And with me are Joel Sherrill, who is the deputy director; Mary Rooney, who is in our treatment interventions and prevention branch; Shahrzad Mavandadi, who's in the services and clinical epidemiology branch; and Aileen Schulte, Nick Gaiano, and Tamara Kees, who are in the Division of Extramural Activities.
Please use the Zoom Q&A feature to post any questions. Participants can enter questions at any time during the presentations. We are recording this webinar, and we will send out an email blast as to where it will be hosted so that you can share this with your colleagues, or you can come back and look at our discussion today.
I do want to apologize for the very quick turnaround between the announcement of this webinar and when it was posted. Events beyond our control, but we're glad to see that some of you have been able to make it today. I'm just going to give an overview of the reason for the changes to the suite of clinical trials PARs that are specifically funded under the Division of Services and Intervention Research. I'm also going to go over the specific changes to the pilot effectiveness announcement and an overview of the new services and systems effectiveness announcement. Then Dr. Sherrill will go over the simplified peer review and an overview of Forms I, which is also new, and then we'll take time to answer your questions.
So, in coming to DSIR last year, the DSIR staff and I did a quick review of our portfolio, and in particular the budgets for the R34 pilot effectiveness studies and the yield of the R01 effectiveness and services announcement, in terms of how the applications we were getting, specifically in service- and system-level interventions, were faring. Our summary is basically this: the R34 amount has not kept up with the cost of living, and intervention and adaptation study designs are far more complex than they used to be. Therefore, we felt this was an opportunity for us to right-size the funding for this particular type of research.
Effectiveness research and services research have different purposes and focus on different targets. Effectiveness is focused on the individual or the organism. Services is focused on the systems of care or the organization. We still encourage a mechanistic hypothesis in both types of trials, but the methods used to study mechanisms in system-level interventions are different from the methods used to study therapeutic mechanisms, so we felt it was important to have a separate RFA or PAR for clinical trials that study systems and services interventions.
Another reason for separating out the services and systems level interventions research is that we feel very strongly about partnership building in systems research. We feel this is highly critical and are encouraging a more thoughtful partnership process moving forward so that we can deploy best practices as soon as possible. In both cases we did a thorough review of funding mechanisms that would address the cost-of-living challenges, provide more support for partnership building, and hopefully will speed the translation of science into service. Now I'm going to hand this over to Mary Rooney who will talk about the changes to the pilot effectiveness research.
MARY ROONEY: Great, thank you, Pat. So as Pat mentioned, I'm talking about the pilot effectiveness research announcement, specifically pilot hybrid effectiveness-implementation trials for our mental health interventions.
So, I'll quickly recap the purpose of this announcement, which is to solicit pilot hybrid effectiveness-implementation research that's consistent with NIMH's priorities for refining and optimizing preventive and therapeutic interventions with previously demonstrated efficacy for use with broader target populations or for delivery in routine care, school, community, or online settings, as well as research on implementation strategies that support the delivery and sustainability of evidence-supported interventions in accessible settings. With our reissue of this announcement, as Pat mentioned, the largest changes are to the budget and the funding mechanism. And there's one change to the scope.
Starting with the funding mechanism: in the previous version, pilot studies were funded under an R34 announcement; they are now funded under an R01 mechanism. However, the scale and scope of the research is squarely focused on the pilot domain, with an emphasis on assessing the feasibility and acceptability of the approach and generating the preliminary data needed to support a well-powered, full-scale R01 trial in the future. The direct costs for these projects have now increased to a total of $750,000 over the course of the project, which is 3 years and utilizes a modular budget, even though it's an R01.
In terms of the scope of the research and what we're soliciting, it remains largely unchanged except that we are now requiring the inclusion of a pilot hybrid effectiveness trial, which includes methods for evaluating intervention effectiveness as well as implementation factors. In our prior version, this was something that we encouraged, but it wasn't something that was required. So that's a pretty significant change and something to attend to because it will be something we'll be looking at in terms of whether or not the application is responsive to the funding opportunity announcement.
One additional very important point to know: because we have switched from an R34 mechanism to an R01 mechanism, we will not be able to allow resubmissions of previously reviewed R34 submissions. If you have an application that you submitted and that was reviewed under the R34 mechanism, you will need to submit a new application under this R01 mechanism; you'll not be able to submit a resubmission of your prior application. At the end, I'm happy to take additional questions through the Q&A.
PATRICIA AREAN: Thank you, Mary. And now I'm going to hand this over to Shahrzad Mavandadi.
SHAHRZAD MAVANDADI: Thank you. So, I am pleased to provide a brief introduction to a new clinical trial funding announcement, effectiveness trials to test system interventions. This funding opportunity uses an R61/R33 exploratory/developmental phased award mechanism to support milestone-driven feasibility and infrastructure development during the R61 phase, followed by a clinical trial during the R33 phase.
In this instance the R33 is not considered developmental but should be used to test the effectiveness of system interventions or implementation strategies. In terms of the award information, please note that application budgets are not limited or capped and should reflect the needs of the proposed work, and the total project period cannot exceed 5 years. This includes a maximum of 2 years for the R61 phase and a maximum of 4 years for the R33 phase. The scope of the project should determine the project period for each of those phases.
Just to provide brief context for the research supported by this announcement, systems and services interventions are operationalized as those that target systems of care, rather than individuals, and attend to issues such as access and adoption, delivery and organization, utilization, and quality of mental health services, with the goal of improved clinical and functional population-level outcomes. The focus of the intervention could be across multiple organizational systems or settings, or at multiple levels within a single organization or setting. The research and methods proposed here go beyond assessing whether the intervention is effective or not; they should advance our understanding of, and specifically test, how, why, for whom, or in what settings or circumstances the system intervention may be effective. Make sure to include a detailed analytic plan and the specific variables that will be used to address those questions.
When preparing your application (and this is a bit different, since we're switching to the R61/R33), it'll be important to take into account the bi-phasic nature of the award and follow any specific instructions listed in the published funding announcement.
The R61 phase is typically a one-to-two-year planning phase that largely focuses on collecting feasibility data and establishing the infrastructure that will be needed to conduct the R33 clinical trial. Applications must include a plan that delineates R61 milestones that signify the completion of major elements necessary to support the larger-scale project being proposed in the R33.
These milestones should include a description of well-defined, quantifiable, achievable, scientifically justified benchmarks and deliverables, with thresholds that will allow program staff to assess progress during the R61 phase and that will establish the feasibility and empirical basis for pursuing the R33 work. Activities may vary and depend on the focus of the intervention and the stage of intervention development. They can include collecting feasibility or pilot data for the intervention or intervention components in situations where preliminary data are not available.
Developing research-practice partnerships and engaging with community and clinical collaborators and end users to ensure intervention appropriateness, feasibility, and potential sustainability and scalability is important, as is demonstrating the ability to access key data sets and the feasibility of recruiting, retaining, and randomizing participants for the trial being proposed in the R33, which brings us to that phase. The R33 phase is intended to provide the definitive test of the effectiveness of the system intervention or strategies refined during the R61. To transition to this phase from the R61 phase, investigators will need to submit a package that includes a progress report summarizing achievement of milestones, and that material will be evaluated by NIH program staff to help inform R33 funding decisions.
In preparing your application, please do be mindful of the specific instructions for application submission that pertain to the information that's necessary to provide and the formatting for the 2 phases of the project. I just want to end with a quick summary of the types of studies that would not be responsive to this funding announcement and, if submitted, unfortunately would not be reviewed or considered for funding. These include: applications that don't explicitly test how and under what circumstances the intervention may be effective; applications that do not include the analytic plan and specification of variables proposed to test the interventions or implementation strategies; studies in academic research settings as opposed to routine clinical care or community settings; and studies that propose to test implementation strategies that focus only on the individual level. (Such studies can be submitted to the Full-Scale Effectiveness Trials R01.)
Finally, applications that propose health literacy interventions without specifically examining the impact on factors like access and engagement or outcomes of care.
If you have any questions, again, regarding responsiveness and program priorities that relate to this announcement, please feel free to reach out. And thank you.
PATRICIA AREAN: Thank you so much. So now I'm going to hand this over to Dr. Sherrill to discuss the simplified peer review framework.
JOEL SHERRILL: Okay, thank you. I'm going to very briefly highlight a few NIH changes to the application and review processes. Note that these changes are not specific to these particular PARs; in both cases, these are NIH-level changes that apply to applications submitted on or after January 25, 2025. These changes apply as of the first receipt date for the PARs we're covering in this webinar.
Based on input from the field, NIH developed revisions to the peer review process and a simplified framework for NIH peer review criteria, as described in notice NOT-OD-24-010. The new framework for peer review applies to research project grant applications submitted on or after January 25, 2025. The simplified review framework is intended to focus reviewers on key questions: specifically, should the proposed research be conducted, and can the proposed research project be conducted?
The simplified framework for NIH peer review reorganizes the former 5 review criteria into 3 factors. The significance and innovation criteria are now included under Factor 1: importance. The approach criterion is now included under Factor 2, which covers the approach, rigor, and feasibility. And the investigators and environment criteria are now included under Factor 3, which covers expertise and resources.
These criteria are meant to focus reviewers on 3 central questions. Factor 1: how important is the proposed research? Factor 2: how rigorous and feasible are the methods? Factor 3: do the investigators and the institution have the expertise and resources necessary to carry out the project?
The next notice, NOT-OD-24-086, informs applicants of changes to the grant application forms and the application guide instructions; specifically, a transition from Forms H to Forms I for applications submitted for due dates on or after January 25, 2025. The application guides and Forms I application packages are posted to the "How to Apply" application guide.
Briefly, some key changes to research project grants under Forms I are outlined in notice NOT-OD-24-086. These include a new attachment field for the recruitment plan to enhance diversity on the PHS 398 research training program plan, and required use of standard forms for the biographical sketch and current and pending (other) support. Again, these application packages are attached to these new PARs.
Before we take questions about the effectiveness and systems intervention PARs, we thought we would summarize and compare the PARs to help address some potential questions about how to decide which PAR to use based on the nature of the intervention you are studying, the stage of the science, etc.
In terms of the nature of the intervention, the pilot effectiveness and full-scale hybrid effectiveness PARs are relevant for testing preventive and therapeutic interventions across modalities, including pharmacological interventions, psychosocial interventions, digitally facilitated interventions, combinations of interventions, etc. They are also relevant for testing other interventions that target the behavior of individuals: for example, interventions to promote service users' access to or engagement in mental health services, and implementation strategies that target provider behavior, such as training or supervision strategies.
In contrast, the systems intervention PAR is used for interventions that target systems rather than individuals; the focus could be at multiple levels within the system, including the service user, the provider, and the system.
In terms of the stage of intervention development across these PARs: for effectiveness trials testing individual-level interventions, the pilot hybrid effectiveness-implementation PAR is used for early-stage pilot testing, while the full-scale effectiveness-implementation PAR is used for trials that are informed by pilot data and designed and powered to definitively test effectiveness. The systems-level intervention PAR uses a phased, milestone-driven approach to move from feasibility to full-scale testing. The R61 phase involves milestone-driven feasibility and infrastructure development. The R33 phase involves trials that are designed and powered to definitively test effectiveness.
Relatedly, the goals of the trial differ depending on the stage of intervention development and testing. For pilot hybrid effectiveness trials, the goals involve examining the feasibility of the intervention strategy, a preliminary evaluation of the intervention's impact on outcomes and at least one hypothesized mechanism, and, importantly, collecting data needed to support a subsequent definitive trial. The full-scale hybrid effectiveness-implementation trial should be designed and powered to test outcomes, examine mechanisms, and examine factors that impact implementation. And the phased systems intervention project should focus first on establishing feasibility, establishing partnerships, and a preliminary examination of effectiveness, and then, in the R33 phase, on definitively testing the intervention's effectiveness and examining factors that account for or moderate the effectiveness.
The PARs also differ in terms of the prerequisite data that are required for each application. For the pilot effectiveness-implementation PAR, pilot data regarding the effectiveness of the intervention are not required; rather, pilot testing is in fact the goal of these projects. But information supporting the feasibility of conducting the pilot work in the practice setting is important to include. For the full-scale effectiveness-implementation PARs, there should be pilot data regarding the intervention's effectiveness in support of the hypotheses. And for the phased systems intervention PAR, the prerequisite information differs for each phase. For the R61 phase, there should be some prior evidence supporting the feasibility of completing the work in the practice setting. For the R33 phase, the prerequisite information comes from having met the R61 milestones, including evidence supporting the feasibility of engaging practice partners, the feasibility of enrollment, etc.
Finally, the three PARs differ in terms of the project period, based on the scope of work. For the pilot effectiveness and implementation PAR, these modular R01s are limited to no more than 3 years. For the full-scale effectiveness-implementation trials, the project period should be justified based on the scope of the work, and the maximum project period is 5 years.
For the phased services intervention PAR, the maximum project period is 5 years, but depending on the current state of intervention development, applicants have some flexibility: the R61 can be 1 to 2 years, and the R33 can be 3 to 4 years. We really want to emphasize that the information Dr. Rooney and Dr. Mavandadi presented is included in the PARs themselves; today's presentation is merely a summary. And we want to emphasize the importance of reviewing all the sections of the PARs before applying.
For example, Section I describes the background and purpose of the PAR and details the scope of research, including examples of responsive and non-responsive studies. Section IV includes very critical information about what to include in the research section and in the application; these are the application instructions for building and submitting a responsive application. In parallel, Section V asks reviewers to address review criteria specific to the PAR. The review criteria in Section V parallel the instructions in Section IV, so it behooves all applicants to review both the instructions and the review criteria to ensure that their applications are responsive and complete.
We also want to note that for each of these announcements, program contacts are listed at the end. We always encourage potential applicants to contact us as far in advance as possible before submitting an application to discuss the match with NIH priorities and the match to the various PARs that we use for effectiveness and system intervention research. We recommend that you start that process by emailing the program contacts listed in the announcements with a brief 1-to-2-page description of the project you're considering so we can follow up with a discussion.
PATRICIA AREAN: Okay, I think that moves us to our questions and answers period. We will be looking at what you all are entering in the question and answer function in Zoom.
PARTICIPANT QUESTION: Right in the beginning there was a slide which mentioned something about modular budgets.
PATRICIA AREAN: What that means is that each year is capped at $250,000 in direct costs, and with a modular budget you don't have to write a detailed budget; you just have to write a justification.
PARTICIPANT QUESTION: Is there any platform on the NIH website through which we can understand the ASSIST process, you know, how to submit the applications? I understand they take a lot of time; it takes about 5-6 weeks to complete the NIH forms for any submission, and there are a lot of details required. So is there an upcoming webinar or any resource material on how to fill in the application?
JOEL SHERRILL: Okay, there are various resources on the NIH website about the submission process and that's probably the best place for an introduction to the process. Also, if you're with a university or a research center that has a research office or a business office, they can be an excellent resource for helping you with the application information.
PARTICIPANT QUESTION: Please provide examples of implementation strategy outcomes at the systems level for the R61/R33 versus implementation strategies measured at the provider level for the R01.
PATRICIA AREAN: So, the best way to think about the systems-level R61/R33 is that many of the implementation strategies that are going to be combined to address access, quality, and cost have already been tested as specific implementation strategies.
For example, a hybrid type 3 trial comparing 2 different types of provider training strategies to support the delivery of evidence-based practice would go to the R01, whereas a systems-level intervention would utilize several implementation strategies to get at better access to care and better quality of care.
Good examples of that are the collaborative care studies for depression, as well as coordinated specialty care for early psychosis. These are usually packaged interventions that address several factors that we know interact in a very specific way to impact the quality, accessibility, and fit for purpose of interventions already found to be effective, where we already know what the mechanisms are for those specific person-level interventions. We're just trying to get those practices out into the field, and we need to understand how to address the system and service barriers to high-quality care delivery.
PARTICIPANT QUESTION: The next question: is the R61/R33 considered similar to the R01 budget type?
PATRICIA AREAN: It is uncapped, so very similar, except that you will have a budget for the R61 phase and then a budget for the R33 phase.
PARTICIPANT QUESTION: Are technical innovation projects using unconventional brain imaging and AI something that would be considered by the program? Is it suitable to submit without any preliminary data?
PATRICIA AREAN: You ask a very important question. It's one that we're grappling with here. I think it really depends on what you mean by, let's say, AI, or how AI is being utilized, and the amount of data that's available to support its effectiveness.
For instance, if you are looking at AI to support an evidence-based treatment, we see that as optimization of an intervention using mathematics or engineering principles.
If you only have some preliminary signal for your intervention, that would go to the efficacy PAR, which we did not discuss today. You would need to test the efficacy of that intervention first before you put it into a test of its effectiveness, which would be much more at a population or specific population level. Joel, do you want to add anything to that, or Mary?
JOEL SHERRILL: I think that's great. I just think that the question also included unconventional brain imaging. If this is a procedure that's not been tested or demonstrated to be efficacious, there are other announcements that are appropriate for initial treatment development and testing.
PARTICIPANT QUESTION: Will slides be available after the webinar?
PATRICIA AREAN: They will be, and we will let you know as soon as we can where those slides are posted.
PARTICIPANT QUESTION: If someone hypothetically applied for the pilot hybrid, would they still be eligible for the full five-year follow-up effectiveness trial, or would they need to complete the effectiveness component in only 2 years, for a total of 5 (or technically a total of 8)?
PATRICIA AREAN: I think, if I understand your question correctly: if you apply for the R01 for the pilot effectiveness and collect feasibility data, it's very similar to the R34; it's just that the budget has increased, and the requirement for hybrid trials applies to both R01s, the pilot effectiveness and the full test of effectiveness.
So, you could theoretically apply for a 5-year project, but you would need to justify the timeline like you would in any application.
PARTICIPANT QUESTION: I'm submitting a grant due on January 30th. Do I use the new Forms I, or the forms I've currently been working on?
JOEL SHERRILL: The announcements we're talking about do not have January 30th receipt dates. You might be talking about another funding opportunity announcement or notice of funding opportunity; follow the application guide instructions attached to the mechanism and announcement you're using. The announcements we're talking about today all have receipt dates after January 25, 2025, and they will use Forms I. If any of our colleagues in DEA want to elaborate, we welcome additional input. Okay.
KAREN GAVIN-EVANS: That is correct. They are February dates, so they will use Forms I, which are being associated with the packages at this time. Thank you.
PARTICIPANT QUESTION: Do all of these funding announcements require a clear test of target engagement, as has been the case in past PARs?
PATRICIA AREAN: For the pilot effectiveness announcement, you would be proposing targets that you would be measuring, and you would have to demonstrate the feasibility of being able to do that in the full-scale effectiveness study. Likewise, because you are testing the effectiveness of an intervention, you need to confirm that the targets for that intervention are retained, given the need to test it for novel populations or adaptations that you may have made to the intervention. That's still required there. For the services interventions, we're interested in the mechanisms that you think the service intervention is using to address quality, reach, etc.
So, your measures are going to be less RDoC-like or at the person level; they're going to be more at the systems level.
PARTICIPANT QUESTION: For the full-scale effectiveness-implementation trial announcement, can you speak to the level of pilot data that is needed?
PATRICIA AREAN: It would be similar to what you would do in any R01 in the past, where you would need to demonstrate the feasibility of your study design and preliminary proof of concept for the intervention. Nothing's different there.
PARTICIPANT QUESTION: For the analytic strategy required by the R61/R33, I'm assuming that the grant needs to provide a strategy to be tested in the R33, not the R61 phase, correct? And would it be the case that the R61 phase may be used to refine the strategy?
PATRICIA AREAN: Yes. So, the main test of the intervention effect is in the R33; the R61 is used for things like partnership building, making sure that you can actually extract electronic health record data, and making sure that the implementation strategies or the service package are feasible.
That's the timeline where you need to work out those kinks, including whether it is feasible. Theoretically, the interventions you're testing have already gone through that effectiveness test and confirmation of the target mechanism, but if you wanted to continue to measure other things like that, you could use the R61 phase to determine the feasibility, the burden, etc., for the R33 phase.
PARTICIPANT QUESTION: And a second question from a guest, wondering about the changes to the peer review framework and what motivated them.
PATRICIA AREAN: What motivated the NIH-level change? I think Karen might be better able to answer that question.
KAREN GAVIN-EVANS: Thanks, Pat. I appreciate that. So, the impetus for the changes was to address review bias: finding ways to reduce reviewer burden, but also bias in the review process.
So, you'll notice with the retooling of Section V that there are no longer questions but statements; they're an attempt to find a better way to get at the information that's necessary without introducing bias. I would echo something that Joel mentioned before: make sure you look at each section. I would hone in on Sections IV and V explicitly when you're looking to understand exactly what is necessary to answer the call for the particular funding announcement.
So do hone in on what is being asked of you and what the reviewers are going to be evaluating you on. That is really the impetus for the change: NIH is trying to make sure that the science, and the merit of the science, is the focus, as opposed to the individual or the individual institution. I hope that's helpful. Thanks.
PATRICIA AREAN: Thank you, Karen. Tamara, thank you for putting the link (Using ASSIST to Prepare Your Application | Grants & Funding) in the Q&A regarding submitting the application via ASSIST; it's the NIH training site with a video showing how to submit the application via ASSIST. You can screenshot the link or save it in your browser.
Aileen Schulte put in the Q&A the NIH website on the Simplified Review Framework, which may have more details about the background of the initiative.
PARTICIPANT QUESTION: Why do we continue to focus on efficacy and populations, rather than on individuals?
PATRICIA AREAN: I'm not sure what the question is asking, but my interpretation, or my answer, is that we test intervention efficacy at the individual level, and we test the effectiveness of those interventions at the population level. The rationale for focusing on individuals is to determine what the safety, effectiveness, and efficacy issues are for that intervention. At the population level, we're interested in how best to reach people, what the differential effects of treatment are across populations, and ways to make sure that people can get access to high-quality care.
Since there are no additional questions, any final thoughts from our crew?
Thank you very much for your time, and I apologize for the last-minute notification, but I'm glad to see very good attendance here. We will be sending out a link soon with the recording of this webinar.
Good luck with the rest of your day.