Webinar: Timely and Adaptive Strategies to Optimize Suicide Prevention Interventions
Transcript
STEPHEN O'CONNOR: I see that people are joining now. Welcome. We are just going to give it a couple of minutes so that everybody can filter in.
Hi, everyone. We're going to wait maybe one more minute to give everybody a chance to filter in. Thanks for your patience. We'll get started soon. Okay.
We can go ahead and get started. Hi, I'm Stephen O'Connor. I'm Chief of the Suicide Prevention Research Program in the Division of Services and Intervention Research at the National Institute of Mental Health. We are pleased that you joined us today.
We are hosting a webinar that's focused on timely and adaptive strategies to optimize suicide prevention interventions. I would like to start with just a little bit of housekeeping.
Participants have been placed in listen-only mode, so their cameras are off and mics are muted. Closed captioning is available for the webinar. You can use the Q&A pod to submit your questions at any point during the webinar, and we will do our best to address them.
If you have any technical difficulties hearing or viewing the webinar, please note these in the Q&A box and our technicians will work to fix the problem. You can also send a direct e-mail to the NIMH events e-mail address shown here. And this webinar is being recorded and will be archived on the NIMH website.
So, suicide prevention is an NIMH priority. The challenges and opportunities section of our strategic plan confronts the challenges that may lie ahead, and it describes some of the unique opportunities for scientific exploration to overcome these challenges and advance the understanding of mental illnesses.
Suicide prevention remains a significant challenge and is addressed in this section and throughout the entire strategic plan. You can visit our NIMH suicide prevention web page to learn more about current notices of funding opportunity, webinars, workshops, and research highlights.
We have a large suicide research team at NIMH, and we are working hard to develop novel approaches and also implement evidence‑based approaches into clinical and community‑based practice settings.
So, researchers have demonstrated the effectiveness of interventions to reduce suicide risk, especially in reducing new incidents of suicide attempts and the intensity of suicidal thinking. This has been demonstrated in both adults and youth. Simultaneously, technological advancements in assessment have improved our understanding of the nature and experience of suicide intensity, which is a much more fluid and dynamic process than was typically characterized in the cross-sectional research designs of the past. There is now an opportunity to deliver more timely support in ways that match preferences and needs. Research in this area has the potential to enhance how well evidence-based strategies work and to better match the unique experiences and preferences of those receiving the services.
This webinar will last for two hours, and it has five presentations and a moderated discussion. Today, we plan to cover these questions: Which data sources are best equipped to inform intervention strategies? Which intervention aspects should involve tailoring strategies versus simpler approaches? And how can such approaches be incorporated into providers' workflows within a healthcare setting?
Briefly, I would just like to orient you to our extramural division structure at NIMH. We have several different divisions and centers, and these map onto the different goals of the strategic plan and the intent of each of those goals. When we think about moving the science from preclinical and efficacy testing to effectiveness testing in real-world clinical settings, we refer to the goals in the strategic plan, and the divisions reflect those different priorities.
So in the division that I'm in, the Division of Services and Intervention Research, a lot of what we are trying to do is fund research that confirms the efficacy of an intervention strategy; that tests the effectiveness of something that has been proven to work by placing it into the hands of providers in real-world settings; and that helps us really understand the factors that impact the implementation of those intervention strategies.
We have a major emphasis on understanding how and why intervention strategies work because it really helps us understand the nuts and bolts of the intervention: how it works to reduce risk, and what types of factors it changes within individuals, within families, and sometimes also within healthcare systems for the providers. Having that type of model allows you to move the science forward. Even if your intervention is not proven effective, it is still an important scientific contribution. It also helps us understand disparities in care, and why interventions work for some people in some circumstances but maybe not for others in other circumstances.
So I would encourage you to contact me if you would like to discuss your research concepts. I can be reached at the e‑mail address there. So please feel free to reach out. Okay. I'm going to stop sharing.
And I would like to introduce our first presenter, Kate Bentley. She is at Mass General and Harvard Medical School.
So Kate, take it away.
Overview of Just-in-Time Adaptive Interventions
KATE BENTLEY: Hi, everyone. Thanks so much for having me here today, and for that introduction, Stephen.
I will go ahead and dive right into an overview of Just‑in‑Time Adaptive Interventions to Optimize Suicide Prevention Interventions.
So for my agenda today, I will start by providing a rationale for why we might consider moving toward more timely and adaptive strategies to optimize suicide prevention interventions. I will then turn specifically to an overview of the just-in-time adaptive intervention and micro-randomized trial. I'll then share an example pilot MRT that our team recently completed to inform a JITAI for suicide prevention. And then I'll end with a few key questions and challenges for the rest of the panel today.
So I'll start with just a bit on the scope of the problem. We know that over 49,000 people die by suicide each year in the U.S. and the suicide rate has increased by over one‑third in the past few decades, as shown here. In addition to suicide deaths, nonfatal suicidal thoughts and behaviors are prevalent. Recent data show that in the U.S. each year, over 13 million adults have serious thoughts of suicide. Just under four million make a suicide plan and about 1.6 million make an attempt.
Given this, we have fortunately seen a real proliferation of research that uses rigorous methods to better understand and predict suicidal thoughts and behaviors in recent years.
One such method that has truly exploded in suicide research is EMA, or Ecological Momentary Assessment. EMA involves frequent self‑report assessment of relevant affect and contextual states in one's natural environment and in real time.
We've not only seen countless different individual EMA studies of suicidal thoughts and behaviors, but also a growing number of systematic reviews and meta‑analyses summarizing this body of work. Showing just a few of those here.
One of the main things that EMA studies have shown us is that suicidal thoughts are dynamic. Here is an example of this from a 2017 study led by Doctors Evan Kleiman and Matt Nock that shows the tremendous variation in suicidal thoughts over a three-day monitoring period. In this figure, each participant's suicidal ideation intensity ratings are shown on a line.
In this study, 94% of participants had a change over the course of hours in their suicidal thought ratings of more than one standard deviation. And over a quarter of people's ratings differed by at least one standard deviation from one survey to the next. Findings that are similar to this have now been replicated in other samples and by other research teams.
We also know that suicidal thoughts and plans can escalate quite quickly to suicidal behavior. Just one example of this is a study from 2017 (not an EMA study) led by Dr. Alex Millner that found that roughly two-thirds of suicide planning steps happen within 12 hours of a suicide attempt, and the median onset for most steps of suicide planning is within six hours of an attempt. So overall, we've learned that suicidal thoughts and behaviors can vary quite rapidly over relatively short periods of time.
A key question that follows from this is how we might think about optimizing our existing suicide prevention interventions to better match the highly dynamic and also heterogeneous nature of suicide risk. In a recent paper led by Daniel Coppersmith, we argued that there are three ways in which most existing evidence‑based suicide prevention interventions may not be optimally matched to the dynamic and heterogeneous nature of suicide risk.
First, timing. Most evidence-based interventions that we have involve contact with a provider in weekly sessions with little or no contact in between. This is not particularly well matched to how much we know suicidal thoughts can fluctuate over hours, much less a week, and it poses a real opportunity to improve the timing and frequency of our interventions by striving to deliver them when and only when they are needed.
And second, our existing interventions overall have relatively little systematic personalization or tailoring to a given person. They are pretty fixed across both people and time. This is especially relevant when we think about suicide risk, which we know is a problem that is very complex and heterogeneous across both people and times or context. Strategies that allow us to adapt intervention delivery in systematic and prespecified ways based on data that are collected from individuals over time have the potential to maximize not only how effective but also how efficient our interventions are.
And last, many interventions for mental health problems and suicide risk specifically lack maximal accessibility. They can be difficult to find, costly, time intensive and fraught with barriers like stigma and hesitance to seek professional support. We know most people who think about suicide don't engage in formal treatment.
Mobile devices offer us the opportunity to increase the reach of existing intervention strategies, including by offering people support based on changes in risk outside of planned session times with providers. The just-in-time adaptive intervention, or JITAI, is just one example of an intervention framework with the potential to optimize suicide prevention interventions in each of these areas. JITAIs are designed to provide the right amount and type of support at the right times. This model is very well suited to addressing conditions that escalate quickly during one's daily life; for example, things like stress, substance use, craving and potentially suicidal thoughts and behaviors.
JITAIs generally involve sending messages, reminders or prompts through a mobile device in real‑time and in real world settings. They are also timely. Intervention delivery is contingent on an individual's need and whether they are receptive to intervention at that moment.
In the JITAI, tailoring variables are used, which can be collected using active measurement like EMA or passive sensor data. And these are used to make decisions about when interventions are sent. Along these lines, interventions are tailored to an individual's dynamically changing context and status over relatively short time scales. So minutes, hours, or days.
And when discussing JITAIs, it is important to also touch on the micro-randomized trial, or MRT. This is an experimental design for building, evaluating and optimizing JITAIs. MRTs involve sequential randomization, meaning that the same person can be randomized many different times over the course of a given trial. Randomization conditions can include whether or not an intervention is sent, but also what type of intervention is delivered.
This confers a couple of key benefits. First, due to many points of within-person randomization, statistical power is maximized. And second, this can allow you to determine not only whether a given intervention is effective, but also which interventions work best in which contexts.
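To make the sequential randomization concrete, here is a minimal Python sketch of a single MRT decision point; the availability check, the 0.5 randomization probability, and all names here are illustrative assumptions, not details of any particular trial.

```python
import random

def mrt_decision_point(available: bool, p_intervene: float = 0.5) -> str:
    """Randomize one decision point for one participant.

    In an MRT, the same participant passes through many such decision
    points, so each person contributes many within-person randomizations.
    """
    if not available:  # e.g., driving or asleep: do not randomize
        return "not_randomized"
    return "send_intervention" if random.random() < p_intervene else "no_intervention"

# One participant, many sequential decision points over a study period:
decisions = [mrt_decision_point(available=True) for _ in range(168)]
```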
And before I go on to some of our recent work in the area, I would just like to make one more quick point. The JITAI is one type of adaptive and timely intervention design framework, and the MRT is just one type of experimental design for optimizing adaptive interventions. As I noted, the MRT is used to build and refine interventions that adapt at relatively fast time scales, so days, hours or minutes.
The well‑known SMART design also allows for optimizing adaptive interventions but involves adapting interventions based on changes observed over longer periods of time like weeks or months. And then there is also the hybrid experimental design which combines more than one design type. Possibly an MRT and a SMART in the same trial to test multiple different intervention components that adapt over multiple different time scales.
Hybrid designs may especially have promise for suicide prevention given that we know risks can fluctuate over both fast and slow time scales. I think we will hear a bit more about these other designs later in the webinar. So for now, I just want to underscore that a range of different designs exist for optimizing different types of adaptive interventions.
So I will now move on to a recent example MRT that we conducted in the area of suicide prevention. Our team, including Doctors Walter Dempsey and Matt Nock, who have joined us today, recently completed a pilot MRT to determine the feasibility and acceptability of one component of a larger JITAI. So most, if not everyone on this webinar today is likely familiar with the evidence‑based safety planning intervention.
Recent work has shown us, though, that actual rates of safety plan use following psychiatric hospitalization, which is the highest known risk period for suicide, are relatively low.
We've also learned that using evidence-based coping strategies to manage suicidal thoughts in the moment is associated with reduced proximal suicidal thinking and reduced risk of future suicidal behavior. This led us to wonder whether the JITAI framework could be leveraged to help promote safety plan use and effective coping during actual moments of elevated risk after hospitalization; therefore, potentially enhancing the single-encounter safety planning intervention.
So in this trial, we enrolled 87 adults who were admitted to a psychiatric inpatient unit for suicide risk. Everyone developed a safety plan before they left the hospital. And then after discharge, they received six prompts on their smartphone each day to complete brief EMA surveys, for 28 days after discharge. These surveys included the questions you see here on current suicide urge and intent. Participants were paid one dollar for each survey they did. We embedded this MRT, which was preregistered, into a larger ongoing EMA study.
So our decision rule in this MRT pilot was very simple. When people reported experiencing elevated suicide urge or intent on a survey, they were randomly assigned to either no intervention or intervention. And the intervention was simple. It consisted of a brief series of smartphone‑based messages that provided specific recommendations of strategies for managing suicidal thoughts in the moment.
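As a rough illustration, the decision rule just described might look like the following sketch; the cutoff value and the field names are assumptions for illustration, not the study's actual parameters.

```python
import random

ELEVATED = 2  # assumed cutoff on the EMA response scale

def handle_ema_response(urge: int, intent: int) -> str:
    """Micro-randomize only when suicide urge or intent is elevated."""
    if urge >= ELEVATED or intent >= ELEVATED:
        # eligible decision point: randomize to intervention vs. none
        return "send_messages" if random.random() < 0.5 else "no_intervention"
    return "not_eligible"
```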
There were nine total interventions, each of which included either content from a person's own safety plan, or general evidence-based coping strategies for suicidal urges.
The specific types of strategies we recommended also varied. You can see just two examples of these messages here. They were quite straightforward and light touch.
Thirty minutes later, we sent people another very brief EMA to assess their coping strategy use since the survey, as well as suicide urge and intent. They weren't paid for this follow-up. So what we found was that about 82% of people completed the 28-day EMA and MRT period. About 5,700 surveys were done in total, which works out to just under a 50% response rate. About 9% of surveys were randomized, and just under two-thirds of participants were randomized at least once. And we had about an 81% response rate to the follow-up surveys.
At the end of the study, people gave on average decently high scores on validated measures of intervention feasibility and also acceptability. And we also did in‑depth qualitative interviews that complemented these findings, but I don't have time to present today. So we also looked at preliminary intervention efficacy. This was using an extension of generalized estimating equations that are designed to provide estimates of time‑varying causal effects of the interventions.
Our primary outcome was proximal coping strategy use, so within hours after a just-in-time intervention. Overall, we found that receiving any versus no intervention was associated with a positive, small but significant effect on in-the-moment coping, as shown here. We also looked at intervention moderators and found that neither pre-randomization suicide urge nor intent moderated efficacy. We did find, though, that personalized messages were more likely to be effective at promoting coping strategy use than general messages.
And then last, we looked at change in suicide urge and intent as the secondary proximal outcome. And although these effects were in the desired directions, they weren't significant.
So in terms of our next steps from this preliminary work, we're about to start a larger MRT that builds on our pilot. This upcoming project was funded last year as part of a larger NIMH P50 Center grant.
So in this project, we'll be stratifying EMA surveys into high, medium, low, and no-risk levels and testing different forms of both automated and clinician-delivered just-in-time interventions. And ultimately our hope is to combine a more optimized version of this JITAI with other forms of adaptive intervention to try to build a comprehensive and also scalable adjunctive approach to helping reduce risk after psychiatric hospitalization.
I will now close very briefly with just a few key questions and challenges in this relatively new area of work, of which there are many. One is: what are the optimal data streams, tailoring variables and time scales to use when adapting these interventions? Indicators of risk or vulnerability that serve as tailoring variables can be measured passively or actively, and over many different time scales and frequencies, each of which has tradeoffs in terms of burden, engagement and accuracy.
And then also, what are the highest priority contexts and populations for suicide prevention JITAIs? I shared some initial work by our group targeting the post-hospital period among adults. You will hear a little later today about some very exciting work among young people, and also about the potential to take these approaches outside of healthcare systems and into community settings.
And third, how do we maximize engagement with JITAIs for suicide prevention? Even if we build something that is quite powerful, if people don't use it or don't engage with it, it's unlikely to be effective on a large scale. And then importantly, how and when do we approach integrating JITAIs within our existing healthcare systems? These are questions, among many others, that I hope we can dive into a bit more later.
So in closing, I just want to thank the amazing collaborators, several of whom are here today, who worked on the projects that I showed.
Thank you very much.
And now I am turning the stage over to Dr. Randy Auerbach.
Passive and Active Data Collection Strategies
RANDY AUERBACH: Great. Thank you for a really wonderful introduction to this panel. And thank you for inviting me.
My job in this panel is really to talk about a variety of different passive and active data collection strategies. And I'm going to focus on our work that has employed novel methods to improve the detection and treatment of depression and suicide among adolescents.
And why this is important, just to start here, is that the strength of our just-in-time intervention is contingent upon our ability to detect distress in real time. And the umbrella of real-time monitoring approaches is relatively broad. What I'm going to focus on, which Kate intimated in the previous presentation, is experience sampling methods. These are active approaches to assessing how we are feeling and thinking and behaving. And generally the tools of the trade in this context are ecological momentary assessment, which is really focused on what somebody is feeling here and now. That is often compared to, say, daily or weekly monitoring, which captures broader swaths of time: how are you feeling over the course of the day, or how are you feeling over the course of the week.
In addition to these more active approaches, our group, in collaboration with Nick Allen, who is also here today, has focused on leveraging passive monitoring approaches. With passive monitoring approaches, we again focus on smartphones, but others on this panel, including Ewa Czyz, have notable papers on utilizing wearables.
But using the smartphone, our group and others have focused on metrics around GPS, accelerometry and keyboard inputs, inasmuch as these are metrics that we can then operationalize into psychological and behavioral phenomena of interest to the outcomes, most notably here depression and suicide.
As I will discuss throughout the course of the presentation, there are a wide range of issues to consider. And, indeed, I think all of us could probably provide a presentation on the issues to consider. But briefly, what I'm going to be sharing with you today is that we think deeply about sampling rates. So how often we are going to assess, you know, in the context of more active assessments and how long we are actually going to follow these individuals. And context, of course, matters.
Are we capturing the construct of interest? I think this is most notable when you're using passive monitoring approaches: you're collecting a wealth of data, but can you actually operationalize it into constructs of interest, particularly as it relates to psychological phenomena? And, of course, one of the challenges, and this was highlighted in Kate's presentation alone, is this idea of missingness. What level of missingness can your study bear while still addressing the core aims? And what level of missingness can you bear when you're thinking about smoothing the data in terms of de-noising, particularly in passive sensing data?
I'm going to be sharing a number of different approaches that we have taken. The details of these studies are not of particular import. I'm going to be providing you a bit of an overview but kind of diving into why these are potentially good examples in terms of utilizing a wide range of strategies to start to detect distress in real time.
And where I'm starting is in a study of high-risk suicidal youth. This is a study that was co-led by Nick Allen. It's a multi-site project where data collection was at both Columbia and the University of Pittsburgh with David Brent. And what I'm showing you here is the utilization of EMA to detect distress, capturing these real dynamic changes in affective states. In this study, we followed adolescents for six months and we had four one-week periods of EMA. And what I'm showing in figure A, which I think really demonstrates this so well, is just one subject's data. This is capturing, on the Y axis, subjective stress ratings. And you are seeing that this individual has enormous variability, from moment to moment and day to day, in their subjective stress severity.
And what is also interesting in this particular study is that we also captured context. And so what we are showing you here is, you know, when they are alone in red, when they are with family in green, when they are with peers in orange.
And then you could take these data in a really nuanced way and try to understand these fine grain differences of how it is manifesting itself across these different groups.
And here I'm directly comparing our higher risk group shown here in pink, these are our ideators and attempters, relative to our psychiatric controls in blue. And what we're demonstrating in this figure is that in interpersonal contexts, these higher risk individuals experience those contexts as more subjectively stressful relative to their psychiatric control counterparts. Said differently, they are not really experiencing the benefit of being in interpersonal contexts in terms of mitigating stress.
We also, as have other groups, successfully assessed mood and suicidal thoughts and behaviors using both daily and weekly monitoring. This is a paper that was published a couple of years ago in which, again, we're capturing daily mood ratings over the course of six months, in groups ranging from very high risk, shown here in pink, to lower risk in blue. And we also had weekly assessments of suicidal thoughts and behaviors.
And you could take these daily mood ratings and create weekly aggregates to understand what might be driving clinically significant ideation week to week over the course of the study. It turns out that a one standard deviation drop in these weekly aggregates of mood was associated with a three-fold greater risk of reporting clinically significant ideation. The level of ideation we considered clinically significant was that at which a licensed clinician would follow up with a participant, conduct a suicide risk assessment and, when necessary, bridge to clinical care. So again, this shows that you may know when to intervene based on both the daily and the weekly assessments.
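A minimal pandas sketch of the kind of aggregation just described: rolling daily mood ratings up to weekly means, then flagging a one-standard-deviation drop relative to a person's own weekly aggregates. The column names and the exact flagging rule are assumptions for illustration.

```python
import pandas as pd

def flag_weekly_mood_drops(df: pd.DataFrame) -> pd.DataFrame:
    """df has columns: participant, date, mood (one daily rating per row)."""
    df = df.assign(week=pd.to_datetime(df["date"]).dt.to_period("W"))
    weekly = (df.groupby(["participant", "week"])["mood"]
                .mean().rename("weekly_mood").reset_index())
    sd = weekly.groupby("participant")["weekly_mood"].transform("std")
    prev = weekly.groupby("participant")["weekly_mood"].shift(1)
    # flag weeks where the aggregate fell >= 1 within-person SD from last week
    weekly["flag"] = (prev - weekly["weekly_mood"]) >= sd
    return weekly
```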
The other issue here, which I think is obvious in looking at this, is that there were a variety of different patterns of missingness in both the daily mood ratings and the weekly suicide assessments. And this issue of missingness is something that I'm going to return to later in the presentation.
If we are reasonably good at collecting active data, it begs the question: why should we go forth and collect passive sensing data? I think for many of those who have utilized this approach the answer is somewhat obvious, in that it imposes less participant burden in the sense that you are just passively collecting these data in the background. It is highly scalable whether you are collecting on a smartphone or a wearable. It is typically easy to employ. It has enormous temporal resolution, meaning you're capturing data within days, across days, across weeks and months.
And I think most notably, you're capturing these behavioral phenomena outside the lab. Now I want to be clear that even though it is more easily accessible in terms of capturing the data, it still requires enormous vigilance from the investigators. And I'm happy to address questions from the audience about how to collect these data with good levels of retention.
When utilizing the mobile sensing data, we've thought deeply as a group about whether we can ultimately begin to utilize some of these data to replace some of our active assessments. This was a study that was led by Lillian Lee, an extremely talented investigator who works with our group. One of our hypotheses is that what we write in our phone in a given day may be a reasonable proxy for how we are feeling in a given day. And so using the EARS smartphone app that was developed by Ksana Health, we were able to collect keyboard inputs from our participants.
And what we did was we essentially extracted out all words. This was over the course of three months with 83 individuals, using over 350,000 messages. And we could utilize VADER at the level of the message to extract the sentiment of that message and then create an aggregate over the course of the day.
And it turns out that more positive sentiment as assessed through VADER predicted next-day mood, even controlling for the prior day's mood. Using Latent Dirichlet Allocation modeling, a topic modeling technique analogous to factor analysis, you could also extract topics. And so, shown here in figure B, there are certain combinations of emoji that were also predictive of next-day mood. And there are certain specific topics of words, as shown here in figures C and D. Laughter-related words were negatively correlated with depressive symptoms, whereas disagreement-related words were positively related to dysphoric symptoms.
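As a sketch of the sentiment pipeline described here, the snippet below scores each message with the real vaderSentiment package and averages the compound scores by day; the data layout is an assumption, and the next-day mood model itself is only indicated in a comment.

```python
import pandas as pd
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def daily_sentiment(messages: pd.DataFrame) -> pd.Series:
    """messages has columns: date, text. Returns mean compound score per day."""
    scores = messages["text"].apply(
        lambda t: analyzer.polarity_scores(t)["compound"])
    return scores.groupby(messages["date"]).mean()

# One could then regress mood[t+1] on sentiment[t], controlling for mood[t],
# in the spirit of the analysis described above.
```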
You could also extract other linguistic features. So in a separate study, led by Carter Funkhouser, we were interested in trying to understand whether we can predict when a depressive episode is beginning to emerge. Here we looked at sentiment again using VADER. And this was over the course of six months, using 1.2 million messages, or about 6.5 million words.
We also focused on personal pronouns. There is a rich literature showing that greater proportional use of personal pronouns is associated with things like rumination, and a rich literature about the use of personal pronouns in terms of suicide risk. And inasmuch as this is a study on depression, we were also interested in absolutist language, words like all or nothing that may capture cognitive rigidity.
And it turns out, when looking at weekly aggregates of these linguistic markers and mapping them onto weekly A-LIFE ratings, a timeline follow-back interview that notes when somebody is experiencing a depressive episode, that an increase in personal pronoun use over the course of a week relative to one's mean connoted about 1.2 times greater odds of a depressive episode.
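A compact sketch of one such linguistic feature: the weekly proportion of first-person singular pronouns, centered on each person's own mean so that "an increase relative to one's mean" can enter a model. The word list and column names are illustrative assumptions.

```python
import re
import pandas as pd

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def pronoun_rate(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in FIRST_PERSON for w in words) / max(len(words), 1)

def centered_weekly_rates(df: pd.DataFrame) -> pd.Series:
    """df has columns: participant, week, text (messages concatenated per week)."""
    rate = df["text"].apply(pronoun_rate)
    return rate - rate.groupby(df["participant"]).transform("mean")
```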
We've also focused on looking at mobility metrics. So this is a study of high‑risk suicidal adolescents, again led by Nick Allen. And here we were really interested in understanding how, you know, mobility metrics may provide some degree of insight into suicide risk.
And so in the context of the study, we knew exactly when the suicide event occurred; for example, an attempt or an emergency department visit. And what we can do analytically is look at the 7-day period preceding this event and compare it to every other 7-day period for this individual over the course of the study. And it turns out that an increase in one's home stay in the prior week was associated with about a two-fold greater likelihood of experiencing a suicide event in the subsequent week.
We see that this effect holds when controlling for baseline ideation and other mobility metrics like entropy and average distance traveled. We're seeing that it's predictive of next week but also holds true concurrently so you can detect risk in the same week.
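The windowed comparison described here might be sketched as follows: compute a trailing 7-day home-stay average from daily GPS-derived values, then contrast the week before a known event with the person's other weeks. The data layout and names are assumptions, not the study's actual pipeline.

```python
import pandas as pd

def homestay_contrast(daily: pd.DataFrame, event_date: pd.Timestamp):
    """daily has columns: date, homestay_hours (per-day GPS-derived value)."""
    d = daily.copy()
    d["date"] = pd.to_datetime(d["date"])
    series = d.sort_values("date").set_index("date")["homestay_hours"]
    rolling7 = series.rolling("7D").mean()  # trailing 7-day mean per day
    pre_event = rolling7.asof(event_date - pd.Timedelta(days=1))
    # all windows ending at least a week before the event, for comparison
    baseline = rolling7[rolling7.index < event_date - pd.Timedelta(days=7)]
    return pre_event, baseline.mean()
```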
Another issue that we, others on this panel, and other researchers who do this type of work contend with is missingness. Missingness, of course, can compromise our capacity to identify distressed states. A really talented investigator in our group, Paul Bloom, recently published a paper trying to understand whether there were sociodemographic or clinical factors that predict missingness, and also utilized the active weekly and daily prompts to determine what may be driving missingness within this study.
And what we find is that there really aren't any sociodemographic or clinical factors driving missingness in our active and passive sensing features. But we are only getting about a 50% response rate with these high-risk adolescents. This is relative to about 70% of daily uploads for our mobile sensing data across both sites. And when you remove the individuals who provided less than 10% of the data, there is an uptick to about 80% of daily uploads.
And what is interesting about this paper is that Paul then sought to determine whether missingness in the data was related to subsequent clinical acuity, or whether clinical acuity drove subsequent missingness, in both the active and the mobile sensing data. And what's phenomenal about this paper and the work that he did is that we're really not seeing in this sample that missingness was in any way related to clinical acuity. This is not to say that clinical acuity is unimportant; it's just not related to the missingness.
And that is important for all of the work that is going to be presented here today because ultimately these just‑in‑time interventions hinge on the fact that we can detect risk amongst our most clinically acute. Because if we can't detect risk in our most clinically acute, we won't be able to deliver these interventions. So the question becomes how can we leverage this information to improve treatment access.
And most of the work that's going to be presented today focuses on just-in-time interventions, but Nick and I have been working on other studies where we are also trying to determine whether we could utilize this information to enhance the delivery of routine clinical care.
As you can imagine, active assessments and passive sensing can improve monitoring in between sessions. It tracks both symptoms and behaviors between these sessions. And because we now are starting to operationalize what may connote a distressed risk state, it provides opportune moments to deliver nudges at times when they may be able to utilize these skills.
We recently started a randomized control trial among high‑risk suicidal youth receiving intensive outpatient dialectical behavior therapy and we are utilizing this Vira platform.
Data collection is just underway. But what's unique about this platform is that there is a clinician platform where they can look at aggregates of the geolocation, accelerometry, keyboard logger, and weekly surveys. These are provided in both daily and weekly arrays that are interpretable and accessible to the clinician. And this is paired with the patient's smartphone app, which is both collecting these data and providing nudges. These nudges deliver the clinical skills that these individuals are learning in the context of their sessions. So the sessions are meant to really optimize skill acquisition, and these nudges could optimize utilization: not just how to use a skill but when to use it in the context of distressed states.
We recently completed a randomized controlled trial in depressed young adults, and this was led by Lauren Weiner. In this particular trial we were leveraging behavioral activation, and we compared Vira plus a coach to Vira self-care. What we found was remarkable retention: about 73% of the Vira plus coach condition stayed in treatment for the 90 days, relative to 43% of the self-care condition. And we're seeing really promising results. In the context of this Vira trial, we're really leveraging the passive sensing data to deliver nudges as to when to utilize these clinical strategies. And we are seeing reductions in both depressive and anxious symptoms relative to the self-care group.
Just in closing, I want to leave with a message of promise. I do think that there is enormous promise. And as Kate intimated, I think that there are also challenges ahead in terms of determining the appropriate use case. When and where and how should we be deploying these types of strategies.
The design is important. How long are we going to follow these individuals? What constructs have the validity that will allow us to dig in and really understand what is going on in the moment?
And missingness, of course, does matter. These data are enormously challenging to work with in terms of making decisions about smoothing. Are the data sufficiently reliable when you're collecting mobile sensing data at the level of the feature over one day, three days, weeks? Once you start working with these data, you see the challenges that arise. And then finally, although we would argue in our group that detecting risk is incredibly challenging, it is really only half the equation. Detecting risk will allow us to know when to intercede, but we also need to spend an equivalent amount of effort, if not more, developing strategies that actually engage an individual in a distressed state.
In closing, I just want to thank my amazing team at Columbia and my collaborators around the country and the funding agencies that support this work.
And with that, it gives me enormous pleasure to introduce Dr. Leslie Adams.
Measuring Social Contexts Associated with Suicide Intensity
LESLIE ADAMS: Thank you so much, Dr. Auerbach. And hi, everyone, thank you so much for attending.
Today I have the pleasure of talking about measuring social contexts associated with suicide intensity. I will share my screen now. There we go. All right. Okay.
So, on the social contexts that shape suicide intensity: my work focuses particularly on marginalized populations, Black and brown populations across the United States and abroad. And by social contexts, I'm referring to broader environmental and societal factors. I will talk about poverty, social isolation, state-sanctioned and community violence, and also discrimination. These are the things that shape how people navigate their daily life, how people live, work and play. And they also influence mental health.
And so, again, these factors don't just affect people in isolation, but in the context of suicide prevention it is really important to understand how we can capture this in real‑time and consider potential adaptive interventions to support these individuals in their most vulnerable moments.
So, just some quick unexplored factors related to social determinants in suicide prevention research; I have highlighted a few that I will be touching on throughout this presentation. First is socioeconomic disparities. Where people live and are housed carries a number of factors related to employment, underemployment, and financial stress, which is a major risk factor for mental distress and suicide.
Neighborhood context: the exposure to community violence, and the resources, or lack thereof, related to healthcare, education, and recreational and green space. And then everyday racial discrimination, which my team and I work on specifically and focus on primarily. For today, I will talk about the daily, compounded instances of unfair experiences due to race; that is considered everyday racial discrimination. It is important to think about and highlight these pieces because they influence mental well-being and suicide risk, and they are potential areas of promise to leverage and focus on mitigating in a just-in-time adaptive intervention.
So just to give a quick overview of two different aspects that shape the social determinants of suicide for the population that I focus on, which is Black Americans: residential segregation and ongoing poverty have left Black Americans with the least desirable housing in the United States, in some of the lowest-resourced communities in America.
And so in addition to these high poverty rates, African Americans suffer from this concentration of poverty. So the figure on the left is from the Economic Policy Institute in 2012 that showed that nearly half of poor black children live in neighborhoods with concentrated poverty compared to their counterparts in other racial and ethnic groups.
This has since not really changed much. The poverty rate has gone down a bit for African Americans, but it has remained higher than for other groups, as we can see in the figure to the right from the Economic Policy Institute covering 2013 to 2019. We still see that those overall rates are high for Black Americans and for those under 18.
And so when we're thinking about youth interventions for those in emerging adulthood and young adulthood, this is still a major, major area of concern and a social context that shapes suicide for this group.
Again, thinking about another major social determinant, which is employment and underemployment: from the Bureau of Labor Statistics we know that as of this year, 2024, Black men experienced a 5.3% unemployment rate, higher than their white counterparts and the highest compared to Black women or Hispanic individuals, both men and women. We saw very high unemployment rates particularly during the pandemic; it has since leveled off and gotten a little bit lower, but looking at it relatively, we still see unemployment, and also underemployment, being much higher compared to their racial and ethnic counterparts. So that is the social context that shapes the work that I'm about to present today.
The major lingering questions that I have as I present our pilot work are: what can we do to enhance suicide and crisis intervention for those in these environments? And how do these social determinants interact with factors such as proximal risk for suicide to exacerbate mental health and related outcomes?
These are the major guiding questions of the work that I'm going to present today from our research team that's called the GRACE Research Team. We are based in the Department of Mental Health at Johns Hopkins Bloomberg School of Public Health. And we broadly look at mental health disparities in black communities. We focus on suicide prevention and understanding the nature of suicide among black boys and men specifically as it is a very under explored area of focus in the suicide prevention world.
We use a number of methods. I will show you some spatial epidemiology methods that my colleague Mia Campbell worked on, which I am presenting today. We also do community-based work, mixed methods and qualitative work, which I will end on. And broadly, we look at understanding how racialized experiences and social context really shape suicide and mental health outcomes.
So I will present today on a pilot grant that we have since concluded. It was conducted during the pandemic in 2021, and it looks at how social context, and particularly daily racialized stressors, influence suicide risk among young adult Black men. Just for additional context, I always show the proportion of Black Americans that report using a smartphone. A question I often get about my work is whether this is even a viable method for this population, and the answer is a resounding yes. Close to 83% of Black Americans as of 2018, and I'm sure the percentage is even higher now, report using a smartphone according to the Pew Research Center. And this highlights a major opportunity to work with Black Americans to broaden accessibility to suicide prevention interventions.
As a lot of people on this webinar know, a lot of our existing suicide prevention interventions are based in the healthcare system, and there are fewer community-based interventions that have been shown to be effective or even applied in these populations. And so thinking about broadening the accessibility of just-in-time adaptive interventions outside of the healthcare system, where there still exist a number of racialized experiences and inadequate resources related to getting timely mental healthcare, is incredibly important.
Our inclusion criteria for this study were Black men ages 18 and up; we were targeting a young adult crowd of 18-35. It is primarily a psychiatric sample, although we did include snowball sampling, which didn't yield many participants. We had people who had received care for self-injurious behavior, suicidal ideation or an attempt, with regular access to a smartphone. We did exclude folks with cognitive deficits and active psychosis, just given the pilot nature of the study. But I am imploring many folks on this webinar and others to consider the inclusion of all of these populations because they are at the most risk of suicide.
This is just kind of an overview of our process. We did use Ecological Momentary Assessment. So thank you to Doctors Bentley and Auerbach for giving a great overview of what Ecological Momentary Assessment is. We had participants complete a baseline survey and then followed them for just seven days. Again, this is just a small snapshot for understanding what their suicide risk is.
We had them respond to random prompts throughout the day and an end-of-day daily diary where we assessed the racial discrimination measure that I will present in the next slide.
We also had some passive data that we assessed: GPS and accelerometer data. And then we had an exit interview to see how well we did; some of the promises and pitfalls of this method emerged in our qualitative exit interviews, which are now published.
So broadly, we had about 10 participants, whom I will talk about today, who finished the pieces of the study that I will present. They ranged in age from 18 to 34. They reported a range of sexual orientations, including bisexual, gay, straight, pansexual and questioning. They had different marital statuses, but most, 90%, were single. Half were employed. And most of them had their high school diploma or a higher level of education.
And again, the aims of our study, in addition to asking whether it worked and was feasible and acceptable, were to examine the relationship between the geographic locations where people are positioned and mental health outcomes related to suicide.
The data that we used, in addition to our EMA surveys and the GPS data, were overlaid onto 2010 census tract data and the 2015 Area Deprivation Index ranks.
So to go through our findings really quickly. First, on the left-hand side to orient you, is the distribution of our Ecological Momentary Assessment responses. This is Maryland, and in the center is Baltimore City. You can see the different responses by census tract, where brighter red indicates more responses and the orange and yellowish colors indicate fewer responses.
I would say, in addition to what Dr. Auerbach and Dr. Bentley mentioned, missingness is a major issue, especially in populations whose social contexts involve high deprivation; they have many other things going on in their lives besides completing this survey. So that is what's shown on the left-hand side.
On the right-hand side, we've overlaid this with the 2015 Area Deprivation Index. The greener the area, the more deprivation; the index combines income, education, employment and other factors related to socioeconomic status. The dots are our PHQ-2 measure of feeling down, depressed or hopeless. You can see that responses of feeling down, depressed or hopeless more than half the days or several days, compared to not at all, are largely concentrated in the darker green areas of high deprivation, compared to other areas of Baltimore City and the surrounding areas of Maryland.
We also did a logistic regression to understand the associations between different proximal risk factors for suicide and where people are positioned at the time they are completing their EMA survey.
Belongingness, a sense of closeness to others, and the belief that others would feel happier without me are three major proximal risk factors used to measure suicide risk, as opposed to asking how are you feeling about suicide today. Just quickly, I will note that we had varying ranges of responses. The first, sense of belongingness among participants, revealed that higher levels of deprivation were associated with lower odds of feeling belongingness. So if you are in an area of high deprivation, you are more likely to say you don't feel that you belong. Closeness was a more nuanced and complex relationship; we found that there weren't many significant associations between deprivation and closeness. However, once we incorporated some predictors, which I will talk about in the next slide, we were able to see some significant predictors that interacted with closeness to inform suicide risk. And then there is feeling that others would be happier without me.
Initially, we were seeing that individuals residing in more deprived areas on the Area Deprivation Index were more likely to endorse the belief that people would feel happier without them. One significant predictor found in this regression was home ownership. In areas where there was high home ownership, we saw mitigation of that negative relationship between belongingness and area deprivation, such that individuals living in areas with higher rates of home ownership were more likely to report a sense of belongingness.
Same with closeness: home ownership was a major factor there, but so was racial composition; areas with higher proportions of non-Hispanic white residents were associated with lower feelings of closeness. And then, related to feeling that others would be happier without me: people in areas with higher levels of home ownership and those who were younger had lower odds of believing that people would feel happier without them. Again, this is interesting and gives us immediate thoughts about how where people live, work and play is associated with their suicide risk.
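A sketch of the kind of logistic regression described here, using statsmodels with toy stand-in data; the variable names and values are invented for illustration, and a full analysis of repeated EMA responses would also need to account for within-person clustering.

```python
import pandas as pd
import statsmodels.formula.api as smf

# toy stand-in for EMA responses joined to census-tract measures
df = pd.DataFrame({
    "belongingness": [1, 0, 1, 0, 1, 0, 1, 0, 0, 1],   # momentary yes/no
    "adi_rank":      [15, 80, 35, 60, 70, 20, 50, 90, 30, 85],
    "home_own_pct":  [65, 25, 50, 40, 30, 55, 45, 20, 35, 28],
})
model = smf.logit("belongingness ~ adi_rank + home_own_pct",
                  data=df).fit(disp=False)
print(model.params)  # a negative adi_rank coefficient would correspond to
                     # lower odds of belongingness in more deprived areas
```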
We again were focused on social context, and for that we brought in the Everyday Discrimination Scale: how does discrimination influence an individual's feeling of closeness, or of belonging in this world, that tethers them here? This is a widely used scale that looks at subtle, more chronic forms of discrimination and captures day-to-day experiences.
Here are nine of the items that we assessed in a binary way: yes or no. And this was assessed at the end of the day. Here you can see again the map of Maryland and Baltimore City, where darker purple colors relate to a higher sum score, meaning participants were reporting more instances of everyday racial discrimination, versus white or light blue.
And then, overlaid with the Area Deprivation Index, we can see that a lot of the areas where people were reporting high discrimination on a daily basis also coincided with areas of high deprivation. Not always; you do see some areas, like in the bottom corner or bottom middle, where there is high everyday discrimination but a middle-range Area Deprivation Index. But you are starting to see some patterns here of how deprivation and discrimination may coincide and influence suicide risk.
I want to close with a quote from one of our participants. As part of our study we gave them the opportunity to respond with voice recordings describing their daily environments. This participant said, "I'm about to leave the house and I always get real anxious about just leaving the house and having to get in my car and go anywhere." This person ended up spending a lot of time at home and wasn't very mobile in their everyday social context and environment.
And this simple statement speaks volumes about the everyday anxiety and stress that many folks face, stemming from their social environment and their lack of safety in these surroundings. It's really a reminder that the mental health elements we capture aren't just static; they really inform how people navigate and move around their social environment. This person did not feel safe leaving their home, and so they decided to spend most of their time there.
To end and bring it back to the goal of today's talk on JITAIs and capturing social context, I also want to quickly highlight some promises and pitfalls. A lot of these have already been stated so I will go through them broadly. The promise of these approaches and adaptive interventions is to provide real-time adaptive support when people are in high-stress moments. Again, if we are tracking this from an active standpoint, where they have to respond, versus passively, we get varying results and successes. But there is an opportunity there with JITAIs to leverage this.
Context-awareness. Again, leveraging what we know about the Area Deprivation Index, with areas of high social deprivation and disadvantage and areas of high racialized stress, we can think about when to deploy continuously tailored interventions. And again, this is a great scalable, low-cost solution, especially for leveraging something nearly everyone has, which is a smartphone, to deliver interventions in real time.
The challenges, from our experiences at the GRACE Lab and from expanding this work: the data quality is only as good as what we are receiving. For the active EMA data, missingness did indicate some aspects of poor mood; we have that published in our most recent study. So sometimes nonresponse is a response, and we were trying to track that.
Representation: again, this was young adult Black men, a very small sample, and the challenges of recruiting, retaining, and enhancing engagement and trust amid constant surveillance are something to be discussed and worked through as we continue to explore JITAIs. And again, there is limited availability of contextually relevant intervention designs. There are a number of suicide prevention interventions out there, but a lot of them are centered in the healthcare system. A lot of the community-based work and other work that I have been exploring as part of my research has not really focused on Black Americans specifically, or on people who have been socially disadvantaged and may not have constant contact with the healthcare environment or may not be able to afford psychiatric care. So that is important to consider as we think about where we are funneling people to receive support when they are experiencing high stress.
Some takeaways: we can leverage spatial patterns in JITAIs to inform areas of intervention. It is complex, so we do need to think about all the different inputs: demographics, qualitative narratives, and proximal suicide risk factors such as social connectedness and happiness, not just the basic suicide questionnaires of are you feeling this way, yes or no, and what is the intensity on the scale. We have to think more broadly about reasons for living and other types of measures that people will feel more comfortable answering, so that we can get more detailed, granular data.
I would just pause and end on acknowledging everyone that has participated in this work. I want to highlight Mia Campbell, who did all of the spatial work that I presented today; she's a graduate student and will be on the job market in the coming years.
And with that, I will pass it off to Dr. Nahum-Shani. Thank you so much.
Methodological Considerations for JITAI Studies
INBAL BILLIE NAHUM‑SHANI: Thank you, Leslie, for a great talk. All right. Hi, everybody, a pleasure to be here today.
I'm going to start by thanking all of my amazing collaborators. And also NIDA and NIH for funding this work.
All of the examples that I'm going to show you today are actually in the area of substance use which is my main area of research. But in the next talk, Dr. Ewa Czyz will show you examples in the area of suicide prevention that lead to our joint work. In the next 15 minutes, I'm going to talk about five common misconceptions related to JITAIs and digital interventions more broadly. And some of these were mentioned in the previous talks but I'm going to elaborate on them.
So let's start with the first misconception. And it is that JITAIs and adaptive interventions are the same. And to understand why this is a misconception we first need to define the concept of adaptation. I know Kate already talked about this, but let's go over this again. Adaptation is formally defined as the use of dynamic information about the person to decide whether and how to intervene.
And why do we need adaptation? We need adaptation because we want to be able to address not only the unique but also the changing needs of people as they progress over time. Now what we call standard adaptive interventions or just adaptive interventions, they are motivated to address conditions that change relatively slowly. And slowly means every few weeks or every few months. That's why the adaptation in standard adaptive interventions happens on a slow time scale.
And these interventions, they typically guide the adaptation of human‑delivered components because human‑delivered components such as coaching sessions or therapy sessions, they can be adapted typically on a slow time scale.
To make this concrete, I want to show you this example. This is an adaptive intervention for youth who visit the emergency department loosely based on work by Maureen Walton. And the goal here is to reduce their substance use.
This intervention starts with a single session in the emergency department plus a digital intervention that delivers tailored messages on a daily basis. At week four, they assess the participant's response status. If the participant self‑reports they used drugs at week four, they are classified as non‑responders and they transition to the second phase where they receive remote health coaching. This is human‑delivered coaching.
Otherwise, they are classified as responders, and they continue with the same initial intervention. So I want you to notice what is going on here. There's an adaptation process: a process where we use dynamic information about the person's response status at week four to decide whether or not to add coaching. And the adaptation process here happens on a slow time scale; it happens at week four. That is because the goal is to address early signs of nonresponse to this minimal support, which are expected to unfold over four weeks.
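To make this concrete in code form, here is a minimal sketch of that week‑four rule, assuming hypothetical function and option names that are not from the actual study protocol. Notice that it contains no randomization; it is a delivery rule, a point that matters for the second misconception below.

```python
# Minimal sketch of the week-four adaptation rule described above.
# Names and strings are illustrative, not from the actual study protocol.
# Note there is no randomization here: it is a delivery protocol.

def week4_decision(self_reported_drug_use: bool) -> str:
    """Slow-time-scale adaptation, assessed once, at week four."""
    if self_reported_drug_use:  # classified as a non-responder
        return "add remote health coaching"
    return "continue initial intervention (ED session + daily tailored messages)"

print(week4_decision(True))   # add remote health coaching
print(week4_decision(False))  # continue initial intervention ...
```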
Now let's talk about JITAIs. JITAIs also include adaptation: they use dynamic information about the person to decide whether and how to intervene. But in JITAIs the adaptation happens on a much faster time scale, every few days, hours, minutes, seconds. Why? Because the goal is to address conditions that change relatively fast. And these interventions, JITAIs, typically guide the adaptation of digital components, because with digital components we have the capacity to adapt interventions on a fast time scale.
As an example, I want you to think about the digital intervention from the previous example, okay? And suppose that as part of this digital intervention, every day we collect information about the person's state and context with EMAs and with sensors. And every day in the evening, if this combined information indicates that the person is at high risk for next‑day substance use, we trigger a message with a protective behavioral strategy. Otherwise, we do nothing. So notice that this is adaptation, because we use dynamic information about the person's risk status to decide whether or not to send the message. But now the adaptation happens on a fast time scale: every day. Why? Because the goal here is to address signs of risk for next‑day substance use, which are expected to unfold on a daily basis. So JITAIs and adaptive interventions are not the same. They both include adaptation, but the adaptation happens on different time scales.
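For contrast with the week‑four sketch above, here is what this daily decision rule might look like. The risk model and the 0.7 cutoff are assumptions made purely for illustration; a deployed JITAI would use a validated risk estimator.

```python
# Minimal sketch of the daily JITAI decision rule described above.
# The risk model and the 0.7 cutoff are assumed for illustration only.

HIGH_RISK_CUTOFF = 0.7  # hypothetical threshold

def estimate_next_day_risk(ema: dict, sensors: dict) -> float:
    # Placeholder risk model combining a few plausible signals.
    stress = ema.get("stress", 0) / 10            # EMA item on a 0-10 scale
    craving = ema.get("craving", 0) / 10
    short_sleep = 1.0 if sensors.get("sleep_hours", 8.0) < 6 else 0.0
    return min(1.0, 0.4 * stress + 0.4 * craving + 0.2 * short_sleep)

def evening_decision(ema: dict, sensors: dict):
    """Fast-time-scale adaptation: decided once per day, every evening."""
    if estimate_next_day_risk(ema, sensors) >= HIGH_RISK_CUTOFF:
        return "send protective behavioral strategy message"
    return None  # otherwise, do nothing today

print(evening_decision({"stress": 9, "craving": 8}, {"sleep_hours": 5.0}))
```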
The second misconception is that a JITAI is an experimental design. And I'm really happy that Kate drew the distinction between an experimental design and an intervention design. So let's dive deep into the difference between these two concepts.
An intervention design is the answer to the question: how should practitioners, therapists, coaches, or the digital technology deliver the intervention in practice? What guidelines and procedures should they follow? That is an intervention design.
So what is an experimental design? An experimental design is the answer to the question how should we investigators, how should we systematically manipulate independent variables so that we can answer scientific questions that are of interest to us? In experimental designs, we have randomizations, why? Because we want to be able to answer causal questions.
In intervention designs we have no randomizations because the goal is not to answer scientific questions. The goal is to guide the delivery of interventions in practice. And when you guide the delivery of interventions in practice you are not randomizing, you are just giving practitioners a protocol.
So if you look at the example of adaptive intervention and JITAI that I showed you before, you will see that there are no randomizations here. Because these are intervention designs, not experimental designs. All right.
The third misconception is that MRTs are always suitable for developing digital interventions, that they are the gold standard for developing digital interventions.
So why is this a misconception? It's a misconception because the design that we select should be guided by our scientific questions. For example, let's go back to the adaptive intervention that I showed you before. And suppose that in developing this adaptive intervention, me the investigator, I have two scientific questions.
One is how to initiate this adaptive intervention. You see, here the assumption is that I should start with this relatively minimal form of support, but maybe I should bring in some coaching right off the bat. So would it be beneficial to start this intervention with or without coaching?
The second question that I have is how to best address the needs of non‑responders. You see the assumption here that I should step up with coaching for non‑responders. But maybe it would be better to simply continue with the same initial intervention for non‑responders.
How do I answer these two questions? In this case I'm not going to need an MRT; I'm going to need a SMART. Kate already defined the SMART. What I'm going to do is randomize participants initially to the intervention with or without coaching. Why? Because I have a scientific question about how to initiate the intervention, with or without coaching. And randomizations are here to help us answer scientific questions, so if I don't know, I randomize.
And I also don't know what to do for non‑responders at week four: either step up or continue? Because I have a scientific question, instead of automatically stepping up for non‑responders, I randomize non‑responders at week four to either step up or continue. Responders continue with the initial intervention because, notice, I don't have any scientific questions about what to do for responders, so I'm not randomizing them. If I had a scientific question about what to do for responders, I could have randomized them as well. But in this case, I don't.
So I want you to notice how the randomizations are connected to my scientific questions. Okay. And also notice that I'm randomizing here on the slow time scale: initially and at week four. In a SMART, we sequentially randomize on a slow time scale. That is because we are trying to answer scientific questions about how to best construct an adaptive intervention in which the adaptation happens on a slow time scale. So I want you to notice the connection between how fast I want to adapt and how fast I need to randomize in the trial design to answer questions about how I want to construct the intervention.
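As a sketch of how those two randomization points map onto the two scientific questions, something like the following may help. It is illustrative only: real SMARTs use prepared, typically stratified and blocked randomization schedules rather than ad hoc coin flips.

```python
import random

# Illustrative SMART randomization logic: two randomizations on a slow
# time scale, each tied to one scientific question.

def entry_randomization() -> bool:
    """Question 1: start with or without coaching? I don't know, so I randomize."""
    return random.random() < 0.5  # True means coaching from the start

def week4_randomization(responded: bool) -> str:
    """Question 2: what to do for non-responders? Responders are not
    randomized, because there is no scientific question about them."""
    if responded:
        return "continue initial intervention"
    return random.choice(["step up to coaching", "continue initial intervention"])
```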
All right. So now let's go back to the JITAI that I showed you before.
And suppose that here as well I have two scientific questions that prevent me from developing a high‑quality JITAI. The first question is about the overall effectiveness of delivering a message with a protective behavioral strategy. I don't know if sending such a message will, indeed, be beneficial on average in reducing next‑day substance use. And suppose I also don't know under what conditions it would be most beneficial. Here the assumption is that I should trigger a message when risk is high. But maybe I should trigger it when risk is moderate to high. I don't know. What I do know is that in this case, the MRT, the micro‑randomized trial, can be useful.
So what I can do is conduct an MRT where every day I still collect data about the person's state and context with EMAs and with sensors. But instead of triggering the message when risk is high ‑‑ because I don't know if I should; I have scientific questions ‑‑ I randomize to message versus no message. And I do this every day. So I'm randomizing sequentially, and that's what MRTs do: they allow you to randomize participants sequentially and rapidly, on the fast time scale. Every day in this case.
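A sketch of that daily micro‑randomization might look like this, assuming an illustrative 0.5 send probability and a simple availability check; real MRTs also log each decision for the causal analyses.

```python
import random

# Illustrative micro-randomization: rather than a fixed risk-triggered
# rule, randomize message versus no message every day, for every
# participant, and log each decision for later analysis.

SEND_PROBABILITY = 0.5   # assumed; MRTs can also vary this by context
randomization_log = []   # (day, decision) pairs, kept for analysis

def daily_micro_randomization(day: int, available: bool) -> bool:
    if not available:    # e.g., the participant is driving; do not randomize
        return False
    send = random.random() < SEND_PROBABILITY
    randomization_log.append((day, send))
    return send

decisions = [daily_micro_randomization(day, available=True) for day in range(28)]
```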
That is because we are trying to answer questions about how to best construct a JITAI in which the adaptation happens on a fast time scale. So, interestingly, these days many investigators want to combine an adaptive intervention and a JITAI. And the reason they want to do this is that they want to integrate human‑delivered and digital components. These components, however, are adapted on multiple time scales, slow and fast. In other words, they want to develop what we call multimodal adaptive interventions, or sometimes MADIs, because we love acronyms. And to construct these multimodal adaptive interventions, or MADIs, investigators face many scientific questions, especially about how well the human‑delivered and the digital components work together.
To answer these questions, they need a design that allows them to randomize participants sequentially to human‑delivered components and to digital components on multiple time scales, slow and fast, simultaneously. And that's what hybrid designs do for them. For example, this is a relatively simple hybrid design. It combines the SMART that I showed you before with the MRT that I showed you before. So notice, at the same time that I'm randomizing on a slow time scale to the human‑delivered components, every day I'm randomizing participants to the digital components. So I'm randomizing on two time scales, slow and fast, because I want to develop an intervention in which human‑delivered and digital components are integrated and adapted on multiple time scales, slow and fast.
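Putting the two earlier sketches together gives a rough picture of such a hybrid design for a single participant: one slow randomization stream and one fast stream running simultaneously. The probabilities and the week‑four response check are, again, placeholder assumptions.

```python
import random

# Illustrative hybrid design for one participant: a slow SMART-style
# randomization stream plus a fast MRT-style daily stream, simultaneously.

def run_hybrid(weeks: int = 8) -> list:
    log = []
    coaching = random.random() < 0.5             # slow: entry randomization
    log.append(f"entry: coaching={coaching}")
    for week in range(1, weeks + 1):
        if week == 4:                            # slow: re-randomize non-responders
            responded = random.random() < 0.5    # placeholder response check
            if not responded:
                coaching = random.choice([True, False])
                log.append(f"week 4 non-responder: coaching={coaching}")
        for day in range(1, 8):                  # fast: daily micro-randomization
            if random.random() < 0.5:
                log.append(f"week {week}, day {day}: send digital prompt")
    return log

print(len(run_hybrid()))
```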
So with that in mind, to decide which design we should use, the first thing we need to do is specify what kind of intervention we want to develop. What is it? Is it an adaptive intervention? Is it a JITAI? Is it a MADI or something else?
And then we need to specify our scientific questions and then we need to let the scientific questions drive the design and not the other way around.
All right. The fourth misconception is that JITAIs should always intervene, or should always be motivated to intervene, when people experience a state of vulnerability. And this is something that came up multiple times in the previous talks. So what is a state of vulnerability? It's a state of high risk for an adverse proximal outcome. And many investigators, myself included, want to design interventions where we identify and address vulnerabilities when they happen in real life. And we intervene when people experience the state of vulnerability because we want to break the link between the state of vulnerability and the adverse proximal outcome. We want to prevent the adverse proximal outcome from happening. But what is the problem with that?
The problem is that in many cases participants may not be receptive to a mobile‑based prompt, to a digital intervention when they experience a state of vulnerability. They may not be able to receive, process, and utilize the intervention that you are trying to deliver.
The best example I can give you is from a study of smokers attempting to quit, Sense2Stop, led by Bonnie Spring. This study, by the way, is a micro‑randomized trial. We found that when smokers attempting to quit experienced stress, sending a prompt encouraging them to engage in a stress management activity on the mobile device did not reduce the likelihood of experiencing stress. It increased the likelihood of experiencing stress in the next two hours compared to not sending a prompt. And we think this has to do with people experiencing stress being cognitively overloaded. When they are stressed, they are not receptive to the intervention, in which case the intervention can be more harmful than beneficial.
So whether or not we should intervene when people experience a state of vulnerability is an open scientific question that we should investigate, as opposed to making assumptions about.
All right. The last misconception is that JITAIs always promote intervention engagement. And the reason this is a misconception is that the statement is not clear: engagement with what? In this manuscript, we define engagement as the extent to which people invest energy in a specific stimulus or task. And when you define engagement in this way, you realize that to conceptualize and measure engagement we need to clearly specify what we are talking about. What are the stimuli and the tasks that we are talking about?
The best example I can give you here is again in the area of smoking cessation. This is another MRT, different from Sense2Stop. This one is called MARS, the Mobile Assistance for Regulating Smoking study, led by Dave Wetter and his team at the Center for HOPE at Utah.
And in this case, we're trying to develop a JITAI that does the following, okay? This is for smokers attempting to quit, after their quit date. Multiple times a day we trigger two questions asking about cigarette vulnerability and negative affect. We use the answers to these questions to tailor a prompt that recommends they engage in a brief self‑regulatory activity. This is something they can do without the phone: take a deep breath and count to three, be mindful in the moment, take a short walk. And the goal is to get them to engage in this brief self‑regulatory activity in the next hour as a way to regulate craving and stress.
So I want you to notice what we are doing here. We are trying to promote engagement with a particular task: the health behavior, the brief self‑regulatory strategy. But to do that, we are actually expecting participants, as part of the JITAI, to engage with a task (completing the survey) and with a stimulus (the prompt). Now, this is a very simple example. In many cases you see JITAIs that include many interconnected tasks and stimuli that participants are expected to engage with in order to increase engagement with the health behavior.
The point I'm trying to make here is that JITAIs can be designed to increase engagement in health behaviors. But to do that, we need to map out all the stimuli and tasks that the JITAI includes, and we need to design them in a way that is engaging. So to conclude, JITAIs have tons of potential, but to use them we need to understand the boundary conditions: when they are useful and when they are not. And we need to conduct research to optimize them, research that answers scientific questions about how to best construct JITAIs that are effective and resource efficient.
And with that, I'm going to hand it over to Dr. Ewa Czyz.
Integrating JITAI into Healthcare Systems
EWA CZYZ: All right. Give me one second to queue up my slide. Okay. I think this is all set. Hi, everyone.
I'm Ewa Czyz. I'm at the University of Michigan. And my job today will be to describe three studies as examples that illustrate how adaptive designs can be used to optimize brief interventions for individuals at risk for suicide in a treatment setting.
The three studies rely on different experimental designs. And I'm glad Dr. Nahum‑Shani and Dr. Bentley already covered this topic a bit.
So the studies I'll describe use a SMART or MRT experimental design to inform an adaptive intervention. The commonality among them is that our aim is to enhance and extend the standard of care services used in healthcare settings. However, we do have different populations and slightly different settings. When describing each study, I will try to highlight some of the considerations that went into the design as well as some lessons learned. All right. I know there is quite a bit on this slide. I will try to walk us through it.
This is a SMART study, and it focuses on psychiatrically hospitalized adolescents. The study is being conducted across three sites: the University of Michigan, Cincinnati Children's, and Henry Ford Health. And I want to acknowledge the site PIs who are involved in this work. What we're doing here, as you can see, is that from the very beginning all youth are provided with a safety planning intervention. It's highlighted here in blue, and this is happening during hospitalization. Then teens are randomized to different intervention strategies delivered after discharge. First, half of the youth are initially randomized to daily supportive text messages. These are offered for the first four weeks after discharge and are completely automated.
And then we are tracking how the youth are doing with daily electronic check‑ins; this is the gray box showing monitoring. Using these daily check‑ins, we are trying to determine whether youth are showing early signs of nonresponse, meaning signs that they might be moving toward a crisis, based on a pre‑defined threshold, or tailoring variable, that we are deriving from the daily surveys.
And we're doing this twice. So we are checking the response status at the end of the first week. And then again, we are rechecking whether they are responders or non‑responders at the end of the second week following discharge.
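As a rough illustration of what such a check might look like, the sketch below classifies weekly response status from daily check‑in scores. The composite score, the threshold, and the handling of missed days are all hypothetical; the study derives its own pre‑defined tailoring variable from the daily surveys.

```python
# Illustrative weekly response-status check (hypothetical tailoring
# variable, threshold, and missing-data rule).

def classify_week(daily_scores: list, threshold: float = 3.0) -> str:
    """daily_scores: one risk composite per post-discharge day (None = missed)."""
    observed = [s for s in daily_scores if s is not None]
    if not observed:
        return "non-responder"  # assumption: treat total missingness as concerning
    mean_score = sum(observed) / len(observed)
    return "non-responder" if mean_score >= threshold else "responder"

# Checked at the end of week one, and again at the end of week two:
print(classify_week([2.5, None, 4.0, 4.5, 3.5, 4.0, 3.0]))  # non-responder
```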
For youth who are flagged as non‑responders and presumed to benefit from additional support, we re‑randomize these teens to receive one of two augmentation strategies: either booster phone calls with a counselor, or asynchronous communication with a counselor using a web‑based portal similar to a patient portal. So it's meant to mimic the communication available in healthcare systems.
For youth who are classified as responders, meaning they seem to be doing well within those first two weeks after discharge, we are not providing further intervention.
And I wanted to show how the SMART experimental design that you see on the left maps onto the adaptive interventions, meaning the treatment, the distinction that Dr. Nahum‑Shani drew, and how the two correspond to one another. You can see in this table that the SMART design includes four embedded adaptive interventions. Each starts with the safety planning intervention and then includes a different sequence of post‑discharge support depending on whether someone is a responder or a non‑responder.
And our goal for the SMART trial is to understand which combination of supportive interventions following discharge is best, and for whom.
So, I would like to highlight some considerations related to carrying out a SMART with a clinical population. First, consistent with clinical care, as I mentioned, all youth receive a safety planning intervention. We are not manipulating that at the very beginning, because safety planning is considered standard of care; that is what they are receiving in the healthcare setting. Our scientific and clinical questions center more on optimizing post‑discharge support, which as we know is often recommended but not necessarily delivered or standardized in this setting. So what this particular SMART focuses on is the optimization of post‑discharge support.
Second, and this is really by design, the first line of post‑discharge support begins with automated texting, which is the less resource‑intensive approach and may be more feasible for a healthcare system to implement for most youth if it turns out to actually be an effective strategy.
We are reserving the more resource‑intensive support, booster calls or asynchronous communication, for youth showing early signs of nonresponse.
Finally, I would like to highlight a challenge that can come up in the clinical setting, which has to do with the technology itself that many adaptive and just‑in‑time adaptive interventions rely on.
So for example, there may be specific permissions or requirements needed to use a technology‑supported intervention or software in a clinical setting. In this study, we actually opted to use something simpler: text messaging and portal‑like communication that can easily be integrated in a healthcare setting, given that these are already being used routinely by healthcare systems for things like appointments and reminders.
However, in the case of a study ‑‑ and this is still a research study ‑‑ we are not actually relying on existing patient portals, even though they are being used. Instead, to help us get our study off the ground, our team developed a customized platform to deliver the portal communication and the text‑based intervention components. If those prove to be effective and helpful, we can more easily integrate them into real‑world practice.
All right. So let me move to the second study. This is a randomized controlled trial that has an embedded micro‑randomized trial. Our goal is to pilot an intervention for parents of adolescents who are seeking emergency department services for suicide risk concerns.
And the goal here is to learn about the feasibility and acceptability of the intervention as well as the study procedures, which include the MRT procedures.
Participants, who in this case are caregivers, were randomized to a control group or to a six‑week texting intervention that included one or two texting components. You can see that both intervention arms ‑‑ those are the two top boxes in blue ‑‑ were comprised of a component where parents received what we are calling adolescent‑centered texts. Those were intended to encourage parental engagement in suicide prevention activities after discharge from the ED: things like means restriction, monitoring of suicide warning signs, and providing support.
And one of the intervention groups also included what we're calling parent‑centered texts, highlighted in green, where the intention was to improve parents' own well‑being and stress level. This parent‑centered component included an MRT where parents were randomized twice daily, once in the morning and once in the evening, to receive or not receive a parent‑centered text.
The idea here, which you can see in this figure on the right, is that we conceptualized the parent‑centered component containing the MRT as a way to optimize parent‑directed support for caregivers, with the intention of lowering proximal stress (stress occurring on a daily basis) and improving their affective well‑being, as a way to increase parents' capacity to implement more effective support for the youth. So, to actually be able to implement some of these adolescent‑centered recommendations.
As we can glean from the literature, but also probably from your own life: when you are very stressed or not in a good mood, you are less likely to be receptive to recommendations or to implement them in a very effective way. So the idea is that if we can reduce parent stress and understand when and how these prompts meant for parents' well‑being should be delivered, we can ultimately help parents better support their youth, which would ultimately be related to improving youth outcomes.
So what about some design considerations? Well, first, and this is not specific just to adaptive or just‑in‑time adaptive interventions: something that was very helpful for us when coming up with this design was obtaining significant input from parent stakeholders as well as providers and suicide prevention experts. And actually, we originally were not planning to incorporate an intervention component focused on parents' own well‑being. This came directly from parents' feedback, with parents feeling that this is an important focus of the intervention.
So we ended up adding this component following input from parents themselves.
Caregivers were also involved in vetting the content. So in this case, literally each message in this intervention was vetted by parents themselves. And I would just emphasize how critical stakeholder engagement can be in shaping the intervention from the beginning, which again is not just specific to adaptive or just‑in‑time adaptive interventions.
Another important consideration that I would like to highlight for the study has to do with intervention sustainability and extent of personalization.
So regarding sustainability, many of you know that EDs are already quite busy serving high volumes of pediatric patients who are coming in with suicide risk concerns. For this reason, we designed this parent‑facing intervention so that it could be fully automated, and we used texting given that it is relatively accessible, relatively low cost, and can be quite scalable if it in fact proves to be an effective strategy.
With regard to tailoring and informing a future just‑in‑time adaptive intervention, we decided to embed the MRT in the component focusing on parental well‑being, given that we conceptualize parental stress as something that is more variable and ultimately linked with parents' capacity to act on or implement some of the recommendations we are providing as part of the adolescent‑centered messages.
And here there were some choices about how rapidly and how frequently parent‑directed support should be adapted.
We ultimately decided on two segments of the day when parents are more likely to interact with their youth: the morning and the evening. Truthfully, though, the timing and frequency of tailoring, and how much is really necessary, is an important issue for any context and any population.
And that brings me to the very last study that I'm going to describe today. This is a pilot RCT that I'm co‑leading with Dr. Adam Horwitz, who is a really close collaborator and a fellow faculty member at the University of Michigan. Unlike the two prior studies, this pilot focuses on adults at risk for suicide who are seeking emergency department services. The intervention combines safety planning, a standard of care approach used for individuals at risk of suicide who are discharged from EDs.
This component will be delivered in electronic format in the ED to ease the distribution of the safety plan. And it will be combined with a text‑based support program provided across four weeks following discharge. You see that the MRT portion is highlighted here in this sort of orangey‑yellowish color, and the idea is that we are delivering the text‑based support program up to twice daily.
And here we are trying to understand how much we should personalize prompts that focus on coping and safety plan use. We are using different levels of personalization based on the initial electronic safety plan completed in the emergency department, as well as dynamic personalized feedback referencing participants' recent functioning, reported via surveys we are collecting at the same time. Our goal is to use the MRT to explicitly answer questions about not only whether and when messages can be beneficial, but also what type of message, with regard to degree of personalization, can impact functioning in day‑to‑day contexts as well as suicide‑related outcomes.
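To make the degrees‑of‑personalization contrast concrete, here is a sketch of how message construction might differ by level. The templates, field names, and the three‑way randomization are my own illustrative assumptions, not the study's actual content.

```python
import random

# Illustrative contrast between a generic prompt and a more personalized
# prompt built from the participant's electronic safety plan and recent
# surveys. All names and templates are assumptions for this sketch.

def build_prompt(level: str, safety_plan: dict, recent_survey: dict) -> str:
    if level == "generic":
        return "Consider using a strategy from your safety plan today."
    # "personalized": reference the person's own plan and recent functioning
    strategy = safety_plan.get("top_coping_strategy", "a coping strategy")
    mood = recent_survey.get("yesterday_mood", "how you have been feeling")
    return (f"You recently reported {mood}. Trying '{strategy}' "
            f"from your safety plan might help today.")

def twice_daily_decision(safety_plan: dict, recent_survey: dict):
    level = random.choice([None, "generic", "personalized"])  # micro-randomized
    return None if level is None else build_prompt(level, safety_plan, recent_survey)

print(twice_daily_decision({"top_coping_strategy": "going for a walk"},
                           {"yesterday_mood": "feeling down"}))
```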
So in terms of considerations for this setting that can be baked into the experimental design, the way Dr. Horwitz and I are doing with this MRT, one ‑‑ and this is something that I think both Dr. Nahum‑Shani and Dr. Bentley already alluded to ‑‑ is really how complex the adaptive intervention or just‑in‑time adaptive intervention should be. More highly personalized interventions ultimately do require more effort from the people receiving them, because they have to respond to more frequent prompts.
But they also require more from a health system perspective, because more complex interventions require more effort to develop, program, and maintain. So it may be that simpler messages or interventions could be just as beneficial or effective. Or it may be that, for suicide risk concerns, we truly do need very complex interventions. That is really an open question, and something that can be baked into the design.
I previously highlighted the importance of obtaining feedback from relevant stakeholders. Related to this issue is how adaptive or just‑in‑time adaptive interventions might fit within the existing workflow in a healthcare setting. For example, the electronic safety plan and texting program I described would ideally be seamlessly integrated within the current infrastructure in the ED and be connected to medical charts. We've started having these discussions with information technology stakeholders in our health system to learn how this could be supported.
However, within the timeframe of this project, which is a three‑year pilot, we don't necessarily have enough time to make this happen. So we are working on developing customized intervention platforms for these intervention components that have the right features and elements to facilitate their eventual integration if and when they prove to be helpful. In other words, sometimes we need workarounds to carry out the study while informing plans for future integration. All right. I'm going to zoom through this quickly.
And to summarize, and this has been echoed in others' presentations as well: it is really critical to consider how the experimental design we are thinking about, whether a SMART, an MRT, or a hybrid design, fits the scientific questions and the clinical issues for a given population or setting. For example, what makes sense practically and clinically with regard to the timing and frequency of adaptations, how rapidly these interventions not just can but should be tailored for a given population, and what intervention options are suitable for a given context.
Second, we may encounter practical challenges in developing and evaluating adaptive or just‑in‑time adaptive interventions that rely on technology in a clinical setting. As I mentioned with the last couple of studies, it may be helpful to think ahead about how a given digital tool or software may fit or be approved for use in a healthcare setting. In some places we have to be prepared that integrating a new tool or software can take time, and there are many layers of approval that we have to be ready to work through.
In addition, it may be helpful, and in fact necessary, to work with information technology experts and decision makers in this space so that we can learn about the technical limitations and requirements needed to eventually embed the technology that delivers adaptive or just‑in‑time adaptive interventions in these real‑world clinical settings.
And finally, I just wanted to echo what has been said in previous presentations as well: it is important to partner with stakeholders in clinical settings, from patients to providers to decision makers, to help us carefully vet the interventions we are developing or evaluating, the various processes, and our assumptions about how something might work in the clinical setting, so that we can ultimately problem‑solve around challenges and develop more impactful interventions.
And just wanted to give a shoutout to the many collaborators involved in this work as well as research funding that is helping these studies keep going. Thank you so much.
Moderated Discussion
MARY ROONEY: All right. Hello, everyone. I'm Mary Rooney, Associate Director for Prevention Research here at NIMH. And I'm going to be moderating a discussion this afternoon with the four discussants we have here today. I just want to start off by thanking each of the presenters here this afternoon for such outstanding presentations. Very engaging, interesting, thought provoking, and we really couldn't ask for more. So thank you so much.
And I will just introduce our discussants here to start off. Their bios are available on the Eventbrite webinar page; you can go there for more information.
But with us we have Matt Nock from Harvard University, Evan Kleiman from Rutgers, Nick Allen from University of Oregon, and Walter Dempsey from University of Michigan.
Welcome, everyone.
So I wanted to kind of start off with a question related to something that came up in the Q&A.
It kind of also builds off of the last presentation that we just heard. And it is related to the different contexts that might be appropriate for just‑in‑time adaptive interventions, contexts like wraparound services for youth, or any services that might include a 24/7 crisis line or intervention component.
And so while this intervention research is still in relatively early stages, I think it would be helpful to hear what you all see as some of the hurdles we need to overcome in order to embed these interventions in real‑world practice and community settings. And also, what are some things we can do in designing these studies now, in the earlier stages, to proactively address those challenges and hopefully minimize the research‑practice gap in the future?
Go ahead, Matt.
MATTHEW NOCK: Just jump in?
MARY ROONEY: Yes.
MATTHEW NOCK: Great, thanks for the question.
I think these approaches are remarkably appropriate for a really wide range of contexts. And I appreciate that you asked some of the older folks here to be the discussants rather than the presenters; and the benefit of that is we've been around for a little longer.
We know that for decades the interventions that have been available for suicide are weekly or monthly meetings with people. And we know that suicidal thoughts and suicide risk doesn't wait for those timelines. And so now we've got information on the people at all of the in‑between times, and we've got the ability to intervene at those times as well.
And that's relevant for outpatient care. It's relevant for inpatient care, for residential care, for wraparound services. We know from smartphone monitoring studies that suicidal thoughts and risk are ebbing and flowing constantly. And so I think it is really important to have this kind of coverage for a range of contexts.
In terms of challenges ‑‑ and I'll spend 30 seconds and then pass it on ‑‑ we need the kind of research that NIMH has been funding, and I think we need a lot more of it. And we need larger studies. You know, if suicide attempts are what we are trying to predict, that is still a pretty rare outcome: 10% of people who pass through a psychiatric emergency stay make an attempt in the next month, 20% in the next six months. To have enough events to predict and intervene on, we need samples in the hundreds or thousands. So having results on large samples that replicate is going to be really important for scaling these things, especially given the type of outcome we're trying to predict.
It's so important to get this right so we want to make sure we have got really good data on prediction and intervention. Over.
MARY ROONEY: Nick?
NICK ALLEN: Yeah. As you know, I'm very excited about this work and have been collaborating in it.
But I also as part of my role we do a lot of user‑centered design work with clinicians and with business gatekeepers in healthcare services.
And one of the things that we're very regularly hearing from clinicians is that there is a real tension here between wanting to know more about someone's risk status and being very concerned that knowing more about the risk status will put you in a position where you have actionable information but are not able to act on it at that time.
And that has implications for clinical care but also medico‑legal implications as well. So there is this concept ‑‑ some people have used the term ubiquitous liability ‑‑ the idea that if you're collecting continuous or intensive longitudinal data on someone's status, and that data might have some implications for their risk, and you are a clinician who is caring for that person, what are your responsibilities for responding to that data, and in what time period?
So we really often need to think about different contexts. Sometimes this can be well managed because there are wraparound and 24‑hour services available. But there are many, many settings that can't manage it. For example, something like 75% of behavioral health clinicians in the community work in practices of four people or fewer. And for those kinds of services, the ability to have that wraparound care is really not practical.
So I think that as we develop these very important methodologies and all of the exciting work we have seen today, we also need to be thinking about how are we going to ultimately implement them in ways that are acceptable not only to patients but also to clinicians and to healthcare services in ways that manage those risks.
MARY ROONEY: And Evan?
EVAN KLEIMAN: Yeah, I guess I agree with what everyone said so far.
I think the thing that I'll add, too, is really taking a long look at who is going to pay for this stuff once the study is over. We oftentimes think about implementation as a problem for whatever the third step is, towards the end. That's true in many ways.
But a lot of times we need to also think from the beginning like if this thing we are doing, like if it works out, we have this final product, like will this actually work in our current healthcare system?
Which, of course, we should probably change. But that is not necessarily in our sphere of influence. We need to figure out from the beginning whether a designed intervention, if successful, could be implemented in a way that doesn't cost money. Or, if it does cost money, whether it will actually be revenue generating, things like that. It feels icky as researchers to think about that, but it's the reality of how we can implement this.
WALTER DEMPSEY: I just have one final thing because I know we have more questions.
I think one of the things, I know we've been focused on implementation. When I see a lot of just‑in‑time adaptive interventions that are effective, they are often effective for a particular population. But then you might move to a different population, and they may not be as effective. And I think a lot more work on how to adapt and export those JITAIs to other populations is something that's missing.
And then as the statistician of the four of us, I guess I'll say one thing on the more statistical implementation technical side which is a lot of people get very excited about the desire to use machine learning algorithms to exactly know when people are at risk. And then use machine learning algorithms to decide exactly what treatment to provide when they are in those moments.
And robustness and sensitivity, and the potential issues baked into those algorithms, sometimes go under the radar. Both from a system design standpoint (can the system actually handle all of that?) and, on the other side, what are the baked‑in inequities that might come from using these ML prediction or ML treatment rules as opposed to having humans in the loop.
MARY ROONEY: Yeah, excellent points, all of those. I think all of you highlighted some of the challenges and things that we should be thinking about at the earlier stages rather than down the line, and I know people are thinking about them, so that is great.
We have another question. This one was related to Billie's remarks about when do you engage, when is the right time to engage? And is it when people are most vulnerable or in a crisis point? Or earlier, you know, when people aren't in crisis and may be more receptive? And how do you identify those points?
And what is the process, even from a research standpoint, that you go through to determine when the right point is? Also for an individual, right? Individually, not just on the whole. So I'm interested in hearing our discussants' thoughts on this, and any other panelist who may be off camera but wants to chime in at any time should feel free.
Matt, go ahead.
MATTHEW NOCK: Yeah, we've got decades of research on learning to guide us. And to use a common colloquial example, it is just like learning an instrument or learning a sport: you practice under cooler, calmer conditions. If you're shooting a free throw in basketball, you don't practice it during the game; you practice outside the game, and you get the skill down so that when you are in a high‑stress situation you can perform it well.
And the same is true here, to Billie's point. We don't want to teach people a skill or only intervene when people are under their highest level of risk. We want to develop that at other times and then work toward that. So we don't ‑‑ you know, a good thing is we don't have to start from zero here or square one. We have got decades of research on basic psychological science to build on and bring to bear to these ‑‑ to these approaches.
MARY ROONEY: Great point, right. So while the approach, the technology may be new, what we know about human learning hasn't changed; right?
MATTHEW NOCK: Right.
MARY ROONEY: It's just we're applying it in a different way. Definitely. Excellent point.
Anyone else? Walter?
WALTER DEMPSEY: I'll just make a very small point, which is that I think a lot of people, when they hear JITAI, think that a JITAI means one specific adaptive intervention that is trying to do everything, right? It is trying to figure out how to balance between states of vulnerability and states of receptivity.
But a lot of the time you might have an intervention package with multiple intervention components that are all different JITAIs focused on different things. So you may have one that's focused on states of high vulnerability: what do we do to help them?
When we're looking at prevention, that might be a different JITAI component. And then we might have another one that works on engagement.
And so I think often when I see people try and do too much with one component, it is that they really want to build a package of multiple components. And that is an easier way to sort of think about it.
MATTHEW NOCK: Can I just correct one thing Walter said? I think when most people think of JITAIs, they think of Yoda or Obi‑Wan Kenobi.
MARY ROONEY: And if only we could package them. If only we could package them.
MATTHEW NOCK: Otherwise, I agree.
MARY ROONEY: I know. No, that's a great point. Well, speaking of engagement, actually I think with any digital intervention, right, or nonhuman‑driven intervention, engagement can be a challenge, right?
What we see often is engagement early on, and then there is either fatigue around engagement in the intervention or just a drop‑off that happens. With just‑in‑time adaptive interventions, what are your thoughts on where we are with sustained engagement, engagement that lasts longer than a week or two, when we are trying to prevent suicide in the long term in any given individual?
NICK ALLEN: Well, I think one of the findings, a theme that has come through in the literature, is that it helps to have some kind of human touch somewhere in the intervention. And it doesn't need to be very heavy.
So sometimes just the fact that a human was involved in your onboarding can be a factor in ongoing engagement. You can also have very light‑touch forms of human intervention, such as text‑based interactions, things like that, which can quite dramatically increase engagement across usage.
One of the things that fascinates me is this: I have spent quite a few years being pretty skeptical of what we might achieve with chatbots. But of course, given the developments occurring in that field, the large language models and the sophistication they are bringing to that sort of technology, and the fact that people might even be developing relationships with chatbots, it will be interesting to see how those very scalable technologies might bring that human touch into the intervention so that there can be an enhancement in engagement.
MARY ROONEY: Great points.
STEPHEN O'CONNOR: I see Billie has got a hand up.
MARY ROONEY: Yeah. Billie, want to go ahead?
INBAL BILLIE NAHUM‑SHANI: Yeah, is it okay to chime in? I wasn't sure.
MARY ROONEY: Yes, please.
INBAL BILLIE NAHUM‑SHANI: So I completely agree with Nick. The only thing I want to add, and this relates to something in the chat.
One of the mistakes that people make when they design the JITAI, they are thinking that they could, or they should pay participants for completing assessments used to tailor and make decisions in real time.
And that is a problem because, yes, it can increase engagement. Obviously, if you pay people ‑‑ well, not always, but in many cases it does. But it also increases the cost of the intervention. It means that when you roll out the intervention in the real world, you have to include the incentives in order to get people to engage and benefit from the intervention. So that makes the intervention more costly and less scalable. That's just something to think about.
MARY ROONEY: Yeah, definitely. Matt, did you have something?
MATTHEW NOCK: Yeah, I think this is a huge, huge issue. And it's an issue with virtually all smartphone app‑based things. Not just interventions.
Apps in general. You know, we all use just a small handful and then we shift over time in what we use and how engaged we are. So this is big science and big business figuring out how to get people to engage in ways that are helpful and not harmful and how you sustain it over time.
We are not alone in trying to think it through and get it worked out. It is a huge, huge piece of the puzzle.
MARY ROONEY: And Randy, I see your hand up.
RANDY AUERBACH: Yes. The presenters are very civilized, we raise our hands.
I think one of the other challenges ‑‑ and I think all of these points are excellent ‑‑ is how academic medicine can keep pace with technology so that we are not creating yesterday's interventions and failing to optimize what we could do.
I mean, the most obvious candidate in the room is AI, for which we're developing condition‑facing tools, with all of the appropriate caveats in terms of not wanting to have any iatrogenic effects. But the challenge, of course, is that by the time something takes form, we run the risk of it being dated by the time it is ready to be deployed in context.
And I really think, you know, there are obviously mechanisms through NIMH, including SBIR and others, but I think we really have to be careful as we put forth the energy to rapidly improve detection. I think we are not quite there, but we are getting better; Evan's and Matt's and others' work is showing that we are able to deploy this in different types of places, which is really exciting. And I really worry that when it takes form, we will be playing kind of Nintendo, which I know Matt understands, when we are really in this advanced gaming world. And I think this is a challenge that we think about a lot, but I don't really have a particularly good answer for it.
MATTHEW NOCK: I was going to say my Clippy‑based intervention is just about ready for deployment.
RANDY AUERBACH: I love it.
MARY ROONEY: Yeah, it is a challenge, right? And I think it is true in a lot of tech areas.
Do you know of any areas in healthcare and technology where somebody is doing a good job of addressing that even now? Outside of mental health even?
RANDY AUERBACH: Well, I think it's rife with examples of it not being done well.
MARY ROONEY: I know. I can think of those.
RANDY AUERBACH: I think the problem is that most of the digital health options out there that are available in the App Store or Google Play are without any empirical evidence. But their UX is to die for. And so I think that is kind of what we're up against.
MARY ROONEY: Yeah.
EVAN KLEIMAN: I think it's a challenge. First of all, the more public health kind of challenge is better educating consumers and providers about apps. But then also figuring out how we can iterate research quicker in a way that is still scientifically sound, and working with regulatory bodies to get things approved quicker, which I think everyone would agree would be great.
But then also figuring out ways to have the apps that we use look better than a standard‑grade research app, to at least get closer to approximating the final product. But I think it requires a lot of areas outside of just mental health to make that work.
MATTHEW NOCK: And, of course, there are, you know, millions and millions of dollars going into monitoring people and intervening in terms of sales and marketing, trying to sell us the T‑shirts and sneakers we want and so on. And it generally does a good job, at least in my case, but it is often wrong and totally misses.
And that is a concern when your outcome is suicide or substance use or alcohol use and so on. And so I think there are examples where it is well done. But given the stakes here, you know, we really want to make sure we get it right before we deploy. To Randy's point, it has to be slick and shiny but also science based and accurate for us to feel ready to deploy it at a large scale.
MARY ROONEY: And, Walter, did you have something? I think I ‑‑
WALTER DEMPSEY: Just, yeah, my only addition on this, again, comes from a stats perspective.
A lot of these companies, like Amazon, right, can iterate on a weekly scale because they have thousands or hundreds of thousands of people they are getting data points on for these different factors.
But for us, right ‑‑ I agree with Randy that we might make a Nintendo in the era of the Nintendo 64. Growing up, that was mine. But the biggest issue for us is that if we want to iterate at even a remotely large scale, say a hundred people in a given study to inform the next study, that iterative process is still very slow.
And understanding how to iterate appropriately. I think, you know, as people build these JITAIs, one of the hardest things, too, is thinking about how to incorporate multiple components into a single intervention package. That optimization phase is super important but hard when you can only recruit, you know, 40 people per phase; it is not a lot of information. So I think the scale issue becomes a huge one there.
MARY ROONEY: Yeah. And speaking of things that are difficult to kind of move forward on quickly or iterate on quickly, there is this kind of underlying interest I guess in really like personalized just‑in‑time adaptive interventions where they are really adapted to the individual, right, and responsive to the individual.
And what are some of the challenges with that and why haven't we been able to get there yet?
MATTHEW NOCK: A big one is the statistical challenge of building person‑specific models that tell us when this individual is at risk.
Shirley Wang, a researcher at Yale University, has a paper just getting ready to come out in Nature Mental Health on idiographic models: using a person's own data to predict when they're at risk, and then figuring out the topic of today's meeting, which intervention for that person at this time. It is just several layers of nuance and complexity.
We are not good at knowing which people are at high risk. We are working on when those people are at risk. And then figuring out when this individual person is at risk and what is the right intervention is just several steps from where we are.
So I think there is great promise here, but we are not yet approaching it in terms of the data, the accuracy of the data we have.
WALTER DEMPSEY: And the only two things I have on this: one is that the right intervention for a person may change over time. So not only is the intervention you want to provide for the person in the moment hard to learn, but that intervention may change after a month or so.
And so I think, especially with sequential interventions ‑‑ and I know a lot of these JITAIs are sequential in nature ‑‑ a large headache is figuring out the right thing to do and also adapting within a person over time as they change.
EVAN KLEIMAN: This is also a massive computer science challenge, an engineering challenge.
Like some of these things, integrating wearables and GPS requires a lot of computational power in the moment on someone's phone. And some of that is happening, but it really is the confluence of better understanding the mental health phenomenon but the statistics and modeling and computer science and battery technology, all of these things catching up to actually make it work in real‑time.
NICK ALLEN: Well, and that issue interacts with another important one, which is questions of privacy and security with data. Because one of the solutions to privacy and security is to do what's called edge computing: you compute everything on device, so nothing that could have privacy implications goes to the cloud. But that requires a lot of computing power on device.
I will say I'm personally probably pretty bullish about what devices are going to be able to do on the edge. A lot are including incredibly sophisticated chips and technology that will allow that personalization to happen.
But the other thing that I think is really fascinating, these questions that we are raising about personalization is that these are questions that working clinicians are trying to answer every day in their clinical work. Is this client at risk now? When should I pivot my strategy? You know, like it is just like these questions are as old as clinical practice.
But what we have is a new kind of data, and it is an exciting kind of data. We need to work out how to bring it to bear on that fundamental clinical question, because clinicians work with individuals, and they are always asking the question about this individual.
MARY ROONEY: Exactly, yeah. Wonderful. Excellent discussion, you guys. I think that's a great place to wrap up.
We are at 3:00. And I just wanted to again thank everyone for their wonderful presentations today. Excellent discussions and just overall enthusiasm for this area of research. It's very exciting and I'm excited to see where it goes next.
MATTHEW NOCK: Thank you, Mary, Stephen, and NIMH.
MARY ROONEY: Take care. Thank you, all.