

 Archived Content

The National Institute of Mental Health archives materials that are over 4 years old and no longer being updated. The content on this page is provided for historical reference purposes only and may not reflect current knowledge or information.

Virtual Workshop: Transforming the Practice of Mental Health Care

Transcript

Mi Hillefors:
For your information, I've started a recording now, Holly. 

Sarah Lisanby:
Justin Baker, I just want to check your audio.

Justin Baker:
Any luck?  No audio?

Sarah Lisanby:
Yes.  And just checking if Joshua Gordon's audio was working?  So, good morning everyone, and welcome to the NIH sponsored workshop, a virtual workshop on transforming the practice of mental health care.  And I want to give a few minutes while our attendees and panelists sign on.  My name is Holly Lisanby, and I really am delighted that all of you are able to join today.  I see now that we do have Joshua Gordon signed on.  And Josh, just wanted to test your audio.

Joshua Gordon:
Hi.  Good morning, everyone.

Sarah Lisanby:
Good morning.  So, I've just opened the workshop, and I want to welcome everyone to this workshop.  And to get us started, I'd like to introduce the director of the National Institute of Mental Health, Dr. Joshua Gordon, who is going to give us some introductory remarks, and then I'll come back with some format and goals for the workshop.  So, welcome, Dr. Gordon.

Joshua Gordon:
Good morning, everyone.  Welcome.  Thank you for joining us today, both the panelists and the attendees.  I'm really excited about this workshop.  It certainly will hopefully be a break from the humdrum every day.  I know that most, if not all, of you are, like I am, working from home, and that your lives have been quite disrupted by what's going on.  So, I want to acknowledge that first, and to thank you from -- for taking time out of your schedules, and for dealing with childcare, whatever else you have to deal with.  And we understand if there's a bark in the background, if there's a kid that comes, you know, and needs something.  So, don't worry about that.  But we're really pleased that you could all join us today. 

One question you might have is why are we doing this?  Why is the NIH having a workshop on transforming the practice of mental health care?  This workshop is meant to be really, truly forward-thinking.  We're trying to imagine what the practice of psychiatry could be like, if we took advantage of the tools that are coming online now.  We know that we could do some pretty amazing things, in terms of the understanding and treatment of mental illnesses, if we can harness these tools.  What kinds of tools am I talking about?  I'm talking about the ability to get broad data sets from a number of different features about our patients, be they features of their brain activity, features of their past experiences, features of their longitudinal course. 

If we can harness all these data -- oh, and I suppose my geneticist colleagues would kill me if I didn't say features of their genetics, as well.  If we can put all this data together, we can learn a lot more about our patients, learn a lot more about the illnesses that they are -- that they are struggling with.  And, potentially, move from a practice of psychiatry, which is based on self-reported symptoms and based upon best guesses on the part of the clinician and the patient as to which treatment options are going to work best for them, are going to be most tolerable for them, et cetera, into a data-driven practice, where we think about patients, yes, from the personal narrative perspective that will always be part of psychiatry, but also from the perspective of what's going to work best for them, and what's going to give them the greatest chance of feeling better and the least chance of side effects. 

My model in thinking about this is the modern-day oncologist who gets as much data as possible about the cancer that her patient is suffering from, and gives that patient a series of treatment options, each of which is accompanied by a probability of success, and also, a probability of side effects.  And, then, with that information in hand, the patient and the doctor can work together to decide which treatment is most appropriate for that particular patient in that particular situation.  This kind of model of the practice of psychiatry will require, number one, a lot more research.  We need to be able to, obviously, answer lots of questions about our patients, both in aggregate and as individuals, in order to be able to imagine this kind of psychiatry. 

It will also require an infrastructure.  It will require large data sets that are integrated and that have, both, depth, in terms of depth of data about individual patients and breadth, in terms of large numbers of patients.  It will require knowing what kind of data to -- that goes into these data sets.  And, finally, it will require a decision engine.  It will require an infrastructure that will allow psychiatrists and patients to put data in and an algorithmic infrastructure that will process that data and give probabilities and likelihoods back to the psychiatrist, in order to aid the patient.  So, that's the big picture we're talking about. 

We're imagining a different kind of mental health care, one that is patient-centered but data-driven.  And we're trying to think what do we at NIH need to do?  How can we take what is really a wonderful revolution of new technologies and really bright people, many of whom are assembled here today, who are working with these technologies, to learn about our patients?  But how -- what can we add to this mix to bring this future potential into reality?  And, finally, I'll just say what I said at the beginning, which is this is a very forward-thinking workshop.  We're not imagining that we can accomplish this mission in the next six months, in the next five years, maybe not even in the next 10 or 15 years.  We're really thinking forward, but we're recognizing we need to lay the groundwork now to be able to achieve that vision in the future.  So, again, I really want to thank you.  I'm very much looking forward to the day, and I'm going to turn it back over to Holly.

Sarah Lisanby:
Thank you, Dr. Gordon.  And I'd like to thank you, Dr. Gordon, and Dr. Shelli Avenevoli, the Deputy Director of NIMH.  And, also, I'd like to thank our moderators, who have graciously volunteered to help organize the content and the discussions we'll be having today.  The moderators for our first session, on the future vision of where we might be able to bring mental health care through the advantages of data and technology, are, specifically, Drs. Andrew Krystal and Justin Baker.  I'd like to thank our moderators for our second session, which will take us through learnings from existing networks and efforts underway, [inaudible] Drs. Guillermo Sapiro and Helen Egger.  Our third session moderators will help us understand the challenges of getting from where we are today to where we want to go in the future.  That session will be led by Dr. Jenn Wilsack and Jeff Gerard.  And our fourth session, on ethics and special populations, is co-moderated by Drs. Deliete Justee and Jeremy Veenstra-VanderWeele.  And, I'd also like to thank our program committee, starting with Monica Carter, who is known to most of you because she really made this happen, and really is doing the lion's share of the work here. 

I'd also like to thank the acting Deputy Director of the Division of Translational Research, Dr. Mi Hillefors, and the members of the planning committee here at NIH, Drs. Jenny Pacheco, Susan Azrin, David Leitman, and Adam Thomas.  I'd also like to thank our speakers.  We know that devoting most of the day is a tremendous ask of your time.  We've invited you because of your complementary expertise that will help us think through where research can make the most impact in transforming mental health care.  And, as Dr. Gordon acknowledged, we understand that the world has changed since we first organized this workshop.  And we understand, as a consequence, some of you will have to be leaving early or not participating in certain parts of this.  And we want this to be flexible.  Please -- we understand that you need to come and go based on your time schedules and challenges.

So, the purpose of our meeting, as you heard Dr. Gordon explain, is to assemble a group of experts to help understand how multiple streams of data, including electronic health records, biomarkers, genomics, digital tracking, physiology, and imaging, can be used to inform treatment selection in mental health care in the future.  A key deliverable from this could be technologies such as clinical decision support, software tools, or other aids, to aid treatment selection.  We've designed this program today to hear from a large number of experts from diverse disciplines in a short period of time, to help us shape our thinking on these topics at a very high level.  As for the format of our workshop: it is open to the public, and this will be recorded and archived in a publicly accessible format.  For those of you who need it, we do have live captioning, which is available in the multimedia viewer, in the lower part of your screen.  You do need to click on that to activate the live captioning. 

A few housekeeping notes: as you join, please mute your microphones.  We understand -- we do have a brief break in the middle of the program, but we don't have a formal lunch break or meal break, so please do take breaks when you need to.  We also have attendees from various time zones, and that shaped some aspects of our time frame here.  Please note that there's a chat function and a Q and A function on WebEx, which you can use to direct questions to the panelists during the discussion sessions.  So, those are actually all of my introductory remarks.  And, so, now I'd like to turn to introducing our first session.

So, our first session is focused on the desired future state -- what problems we hope that new technologies and data-based approaches can solve for us, to improve mental health care from the perspective of the patients, providers, health systems, and payers.  And, so, I'd like to introduce the co-moderators for Session Number One.  Dr. Andrew Krystal is a professor of psychiatry and vice-chair for research at the University of California, San Francisco.  He's an internationally renowned expert on sleep and mood disorders, and he's joined by Dr. Justin Baker, who is Scientific Director of the McLean Institute for Technology in Psychiatry, and the Director of the Laboratory for Functional Neuroimaging and Bioinformatics at McLean Hospital.  Now, I'd like to turn it over to Dr. Krystal to kick off our first session.

Andrew Krystal:
Thank you very much, Holly.  I just want to make sure I can be heard.  Thank you.  As Holly mentioned, the first session is about the big picture: what it is that we hope will be achieved in the future, and what this will do for treatment of people with mental health problems.  And we're going to be looking at this from the perspective of some key stakeholders in this.  That is, the providers, health systems, and payers.  And we have brought together a group of distinguished scientists to help us with this process.  And each one of these individuals will be presenting one slide and will have two minutes to discuss their perspective, and Justin Baker and I will be moderating and keeping everybody on track.  And after the presentations from our experts, the panelists for session one, we'll have a time for question and answer.  Questions will be entered using the chat function of the web interface.  Now, I want to turn it over to Justin Baker, who will start us off by introducing the panelists for session one.

Justin Baker:
All right.  Well, thank you, Andy, and thanks everyone -- Holly and Josh, who are hosting this exciting event, which I'm very pleased to be part of this morning.  I think we all recognize that, in this time especially, we're faced with new access issues for mental health, and so the visionaries you're going to hear from today are really charting the course for that desired future state for psychiatric practice.  And, so, we're going to hear from each separate speaker here this morning.  Very briefly, each speaker is going to present a single slide representing their vision for the future of psychiatric practice.  And we're going to start here with Terri Gleason, who is the Director of the Clinical Science Research and Development Service, in the Office of Research and Development at the VA central office, in Washington, D.C.  Dr. Gleason has had a distinguished research career of nearly 20 years of VA service and has held many leadership positions across the government.  And today, she is going to be speaking to us about her vision and the VA's vision for where psychiatric practice is headed.  So, Terri, please.

Terri Gleason:
Thank you, and good morning everyone.  I'm just delighted to be here.  And, yes, it's an extremely busy time for us in VA, but I think it sets us to be just right in terms of thinking about visioning and, actually, a way forward.  As Dr. Gordon described, a way into the future and how we get there.  So, the place that I sit in VA is the research office, which is embedded in the national health care system.  And what that means, of course, is that we're in a unique position to generate questions that need to be answered for our veteran population, drive the research for them, and bring them back the results right into the health care system.  So, the depth and breadth of data and the richness of data that can be generated, if we are greatly organized and forward-thinking, means that the potential is just, possibly, boundless.  So, there's so many resources that can be brought to bear, in terms of thinking or conceptualizing a learning health care system.

So, right now, today, you know, we're in the midst of this crisis.  I saw numbers yesterday that reflected 1,600 confirmed cases of the novel coronavirus in VA.  And, unfortunately, we're at a level of 53 deaths, as of yesterday.  These are, you know, our veterans who have served, and I think it's a really important point about how sobering the moment is.  On the other hand, I do want to bring to the group that what that means is we can actually capitalize, now, on the lessons we're learning as a system, and for research, that just makes this conversation all so timely.  So, what I'm hoping for -- it's not reflected on my slides, but what I'm hoping for is, in this vision and moving forward, we can take all the learning opportunities from things that seemed like challenges, whether related to regulation or policy, and so forth -- the delays in research, the use of data, and so forth -- and we can capitalize on that and turn it into a situation where we can make progress faster, because now we know it can actually happen in real-time.  So, I did just want to make that global note.  Our AI teams are busy, working 24/7 -- data scientists across VA -- to capitalize on all the data that we need to draw conclusions about the coronavirus, and, in the same regard, we can eventually put that experience to work for mental health in other scenarios, as well.

So, just briefly, I'll walk through the vision for mental health research in VA -- it is, as I said, a diverse system, and I'd like to take into account that, at the patient level, we need the best evidence base, and that's obviously produced by rigorous science and using everything that we can.  I think it was already listed, but all of the richness of data that can be brought to bear for the patient, and directing that, and what that means, you know, from the time they walk in the door, until the treatment plan is developed, and then the response is actually known and recorded in a real way, and accessible to research.  For the providers, we need evidence-based treatments, and we need a level of knowledge that they can use that takes into consideration patient- and system-level factors, to derive the best treatment plan for our patients.  I want to bring in comparative effectiveness research questions, because I think that should be a part of the vision and how that could work in the future, as a tangible, I think, approach that we can actually make progress on quicker if we all get organized around that.  And, then, of course, at the systems level, talking about what a learning health care system really needs and, as we capture data at the entry point for patients and move that through research-level analysis, and bring it back to health care, implementation is really the complete picture, I think. 

The challenges, very quickly -- all of -- all of the above, I mentioned, including data integration that starts from the beginning with a common set of assessment tools -- perhaps not so much leaning on self-report, but considering what we can bring in terms of organization and standardization of data, across the system to begin with.  So, I'd like to stop there, just keeping note of my time, and I'm happy to stay at least through this panel with you today.  Thank you, again.

Justin Baker:
Thank you so much, Terri.  Yeah, speaking of time, I'm going to -- I'm going to, actually restrict -- let's try to restrict introductory remarks to just a couple of minutes for the -- each of the initial speakers.  I mean, what we'd like to do is broaden it up to a discussion among the speakers for the remainder of the hour.  So, if that's okay, I'll go ahead next to David Ledbetter.

David Ledbetter:
Yes.  Can you hear me? 

Justin Baker:
Yes. 

David Ledbetter:
Okay.  Thank you.  I'm glad Josh got in a quick mention of genetics.  I'll try to summarize where we are in genetics, with a future vision that will have complete genomic information on every patient in a clinical setting and every research participant setting.  Our goal at Geisinger is to have complete genomic sequence information on every baby born, as part of routine newborn screening, to inform health care throughout the lifespan and to allow us to prevent disease, rather than always reacting to disease.  So, at a high level, for many common adult diseases, there's evidence from twin and family studies of a very high heritability, or a significant genetic contribution.  These range, for things like cardiovascular disease risk, obesity, diabetes, and many other common diseases, from about 20 to 80 percent -- heritability estimates of 20 to 80 percent -- and, on average, 50 percent.  So, the pie diagram there -- and you can adjust the relative sizes of each category to your preference.  But I'll use 50 percent genetic risk factor.  What I like about genetics is the human genome is big.  But it's finite.  We now have a very good map, and we have tools to measure, accurately, the DNA sequence of the entire genome of all our individual patients, all individual research subjects.  And I think that allows us to, sort of, remove one degree of freedom in complex, multi-factorial traits, on a clinical and a research basis. 

To the right, I've shown the two best-characterized psychiatric disorders in terms of common genetic variants, which comprise the heritability -- coming from GWAS studies, but now being translated into polygenic risk algorithms to predict risk of disease based on hundreds and thousands of different small-effect contributions.  In autism, heritability is among the highest of any disease known, at .9, or 90 percent; for schizophrenia, it is also quite high, at .8.  And people are working on developing polygenic risk scores, in order to predict risk of disease for these disorders.  For rare variants of mendelian genetic disorders -- copy number variants or single nucleotide variants -- in pediatric autism, depending on the cohort, up to 25 percent have a single large-effect-size rare variant that's the main contributor to autism. 

In schizophrenia, there is a much lower yield of about 5 percent, with 22q11 deletion syndrome being the most common and well-known, well-described risk for schizophrenia.  Below that, I've shown that getting this genome-wide SNP data now costs about $50.  And the algorithms for developing polygenic risk analysis are getting better and better.  People in the field are not using them, or planning to use them, as stand-alone tools, but are incorporating them into other disease risk prediction -- the most advanced being cardiovascular risk prediction and some cancer risk prediction.  Whereas, DNA sequencing to identify rare variants is still in the $500 to $1,000 range, even at scale, for exome or genome sequencing.

The bottom conclusion is that stratifying cohorts by genetics decreases heterogeneity; I think it will increase power in identifying important environmental and lifestyle factors contributing to risk, and help us to more efficiently identify effective interventions and treatments, individualized to a patient's genetic background or the presence of a rare genetic disease.  So, the goal, actually, is that clinicians and researchers won't have to think about genetics.  We will have genetic data on everybody -- every individual patient, every research subject.  And that will be built into predictive algorithms and other models, characterizing cohorts of patients for clinical use and for research.  I'll stop there.
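To make the polygenic risk idea concrete, here is a minimal sketch of the kind of score Dr. Ledbetter describes: a weighted sum of risk-allele dosages across genome-wide variants, with GWAS effect sizes as weights.  The variant IDs, weights, and genotypes below are invented for illustration; a real pipeline adds quality control, ancestry adjustment, and normalization against a reference population.

```python
# Minimal polygenic risk score (PRS) sketch: weighted sum of allele dosages.
# SNP IDs, effect sizes (log odds ratios from a GWAS), and genotypes are
# illustrative placeholders, not real association results.

gwas_weights = {          # variant -> per-allele effect size (log odds ratio)
    "rs0000001": 0.04,
    "rs0000002": -0.02,
    "rs0000003": 0.07,
}

patient_dosages = {       # variant -> number of risk alleles carried (0, 1, or 2)
    "rs0000001": 2,
    "rs0000002": 1,
    "rs0000003": 0,
}

def polygenic_risk_score(weights, dosages):
    """Sum of effect size * allele dosage over the variants present in both maps."""
    shared = weights.keys() & dosages.keys()
    return sum(weights[v] * dosages[v] for v in shared)

score = polygenic_risk_score(gwas_weights, patient_dosages)
print(f"Raw PRS: {score:.3f}")
# In practice the raw score is standardized against a reference population
# and combined with clinical risk factors, as the speaker notes.
```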

Justin Baker:
Excellent.  Thank you so much, David.  That's exciting.  So, next, we're going to hear from Amit Etkin and Corey Keller.  Amit is the founder and CEO of Alto Neuroscience, and he's a professor, currently on leave to Alto, in the Department of Psychiatry and Behavioral Sciences at Stanford University.  Corey is a professor in psychiatry and behavioral sciences at Stanford, and also the Chief Medical Officer with Alto.   So, Amit.

Amit Etkin:
Great.  Thank you for organizing this event.  Let me just say, very briefly, you know, one view of the future in psychiatry, which is really to try to break as much from the past as we can, seeing what -- oncology was mentioned earlier -- is out there, as an example.  So, you may have seen, when the animation was coming up there, some articles there on the left -- 2008, 2018, the results from very prominent meta-analyses, where in 2008 Irving Kirsch looked at all of the antidepressants that were submitted to the FDA and did a meta-analysis of not just published but also unpublished data, and found that there was a small effect size between drugs and placebo, and the conclusion in the lay press was that these drugs don't work. 

Andrea Cipriani, in 2018, did pretty much the same thing with more advanced methods, and came out with a conclusion, that the same newspaper memorialized, that, in fact, the drugs do work.  But, in reality, both of these actually had the same exact effect size, and this was meant to illustrate our own change, going from, in 2008, feeling like these drugs work and then our hopes being dashed, and now our hopes being so low that any evidence of effect is reason to celebrate.

My argument is really that perhaps this is a misplaced focus.  And our focus should be on understanding why the effect size is so small in the first place.  And perhaps the future state -- our desired future state -- is to just completely end one-size-fits-all psychiatry, in terms of treatment development and use, and really adopt what oncology has done, which is really a more precision approach for everything. 

So, let me give you an illustration of how this could work.  This is from a recent set of papers we've published on an NIH-sponsored study called EMBARC.  EMBARC is an antidepressant treatment prediction study that took 300 depressed patients -- you can see, in the panel there, the left-hand graph -- gave them sertraline or a placebo, and sertraline was found to be nominally better; a 42 versus 34 percent response rate.  But not very impressive.  What we did, though, was ask the question of: is it that the drug is not effective at all, or is it the fact that our diagnosis is imprecise that's really the issue?  So, we developed a novel machine learning method, trained on EEG data acquired prior to the treatment, and found a pattern that was strongly predictive of sertraline response -- could differentiate that from people who respond to placebo. 

And, if you look at the figure on the right-hand side, this is now stratifying people based on their cross-validated, EEG-based prediction of sertraline response.  And what you see are percentiles -- the lowest quartile, middle quartiles, and upper quartile -- and what you can see is that, very clearly, only a minority of people are responding to the drug.  The highest quartile, based on EEG, has an almost 70 percent response rate, double that of placebo.  So, there are important lessons there: not only that there are people who respond really well to these drugs, and these drugs are actually very effective -- it's imprecision in diagnosis that's the issue -- but that we're driving the effects of drug or device-based intervention studies based on tiny subgroups out of the whole population of patients.  And, so, the hope for the future is really an end to current pharma as it is in psychiatry and opening the door to how pharma is in oncology, which is developing a drug for a group of people based on an understood approach and set of patterns.  I'm going to stop there and let Corey take over.
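As a hedged illustration of the stratification logic Dr. Etkin describes -- not the actual EMBARC analysis or the published machine learning method -- the sketch below uses synthetic data in place of pre-treatment EEG features, scores each patient with a cross-validated prediction of response, and then compares observed response rates across quartiles of that prediction.

```python
# Illustrative sketch: stratify patients by a cross-validated prediction of
# treatment response and compare observed response rates across quartiles.
# All data here are synthetic stand-ins for EEG-derived features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_patients, n_features = 300, 20
X = rng.normal(size=(n_patients, n_features))      # stand-in for EEG features
true_signal = X[:, 0] - 0.5 * X[:, 1]               # hidden relationship to response
responded = (true_signal + rng.normal(size=n_patients)) > 0.5

# Cross-validated probability of response, so each patient is scored by a
# model that never saw their own outcome.
model = LogisticRegression(max_iter=1000)
pred = cross_val_predict(model, X, responded, cv=5, method="predict_proba")[:, 1]

# Observed response rate within each quartile of predicted response.
quartile = np.digitize(pred, np.quantile(pred, [0.25, 0.5, 0.75]))
for q in range(4):
    rate = responded[quartile == q].mean()
    print(f"Quartile {q + 1}: observed response rate = {rate:.0%}")
```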

Corey Keller: 
Thanks, Amit.  I'm just going to say a few thoughts about the challenges for implementing biomarker research in clinical care.  When we think about our future state, we really aspire to have our research be real-world, multi-modal, large-scale, and longitudinal, with minimal patient burden.  This will allow us to develop novel, powerful biomarkers and treatments.  Unfortunately, it's not that simple, and implementing biomarker research is really a balancing act.  When we think about real-world research, we want our research to be as generalizable as possible, but in the real world, with the high complexity of disease and comorbidity, keeping it scientifically rigorous is not a simple task.  Do we utilize strict versus loose inclusion/exclusion criteria?  Do we undergo naturalistic data collection or randomized clinical trials? 

When we think about multi-modal, which biomarkers do we use?  There are many at our disposal and many that relate to mental health, including EEG, MRI, genetics, passive digital biomarkers.  The list really goes on.  When we think about patient burden, how do we maximize the frequency of the assessments and quality of biomarker collection, while minimizing patient burden?  And, finally, analytics.  How do we combine different biomarkers with different time scales, sizes, and dimensions together, in a meaningful way?  In all, for the future state, finding that right balancing act is critical to keep scientific rigor and patient satisfaction high, while minimizing costs and patient burden.

Justin Baker:
Excellent.  Well, thank you so much, both of you.  It's very exciting.  Next, we're going to move on to Timothy Mariano.  Timothy is a medical director in late-stage clinical development at Sage Therapeutics.  He's also an associate investigator and staff psychiatrist at the Center for Neurorestoration and Neurotechnology at the Providence VA Medical Center, in Rhode Island.  Tim.

Timothy Mariano:
Thank you.  It's a -- it's a pleasure to be here with all of you this morning.  So, I'm going to talk about two interrelated aspects that I think will be predominant in the future, involving digital mental health tools and, also, rapid treatments in psychiatry.  And, really, that underpins the two aspects of what I think our future vision, our future state, may entail.  I expect that we'll see a movement away from more traditional, and, if you will, low-temporal-resolution measures, such as rating scales that can capture a patient's state at discrete timepoints that may be weeks or even months apart, and that those will be replaced with digital biomarkers, including continuous biobehavioral data.  And this will be backed by a near ubiquity of the hardware, including not only mobile devices, such as telephones or fitness trackers, but also integration into everyday objects, which might include jewelry, clothing, even medication tablets or packaging for food products, all providing the potential for data that must be weighed, of course, with appropriate ethical considerations. 

In line with this, in the future, we'll probably also see a movement away from traditional antidepressants, that may take on the order of two months to show effect, to more rapid-acting treatments that can demonstrate a meaningful clinical difference in, perhaps, days to weeks.  And these could include noninvasive brain stimulation technologies.  I give a couple of examples on the slide of current and emerging ones.  But also, novel pharmacologic agents, such as esketamine and brexanolone, two recently approved rapid-acting antidepressant treatments. 

The eventual AI ecosystem will be challenged to juggle stakeholder priorities that may, at times, be competing.  I tried to break this down along a couple of dimensions, here.  Treatment developers -- that will be the scientists and companies that are involved in developing these treatments -- will need rapid predictors of response or non-response to their candidate modalities.  Today, we do use digital health tools in clinical development in the biopharma space, but oftentimes, they're included at phase three or post-marketing, which I would argue is too late in the process.  It really should be integrated much earlier.  Also, today, inclusion of those tools usually adds substantial burden for sponsors, for sites, and for patients and participants.  There's usually an additional app that must be installed, additional devices that must be carried, additional visits, additional data entry.  And, also, competing goals for what the treatment developer wishes their eventual device or technology or medication to have in the marketplace. 

One is, you know, with a digital health tool, is it purely a diagnostic, a therapeutic tool, or some combination of both?  And that must be weighed against the candidate treatment, and whether that will be purely a monotherapy treatment or an induction or an augmentation treatment.  All of these would affect how we might want to integrate such tools.  Providers and payers, of course, will want an early warning system provided by the AI ecosystem to identify patients who aren't doing well and to suggest alternative treatments.  And, last but not least, patients themselves will want an AI ecosystem that not only provides some information that they might be doing worse.  That might be important in diseases such as schizophrenia or bipolar disorder, where sometimes patients do not have complete insight when they are starting to do less well.  But patients may also want some therapeutic option; for example, some provision of stopgap treatment to bridge them until they're able to see their provider, either virtually or in person.  And that can comprise web-based CBT bots or online group teletherapy, to give two current examples.  Thank you.

Justin Baker:
All right.  Thank you so much.  And, now I'm going to hand it over to Andy to introduce the remainder of the speakers, and then we'll move on to a discussion of the future vision.  Andy.

Andrew Krystal:
Thank you, Justin.  Our next speaker is Brian Shiner.  Dr. Shiner is a staff psychiatrist at the White River Junction VA Medical Center and an assistant professor at Dartmouth Medical School, in The Dartmouth Institute for Health Policy and Clinical Practice.  He's been involved with research at the VA Medical Center, evaluating the effect of a new primary mental health care clinic on access and quality of depression care, and has completed multi-site research using performance measurement and interview data to evaluate clinical processes that promote higher quality and improved efficiency.  Dr. Shiner?

Brian Shiner:
Yeah.  Dr. Krystal?  Hello?

Andrew Krystal:
Yes.

Brian Shiner:
All right.  You can hear me now?  Okay.  Great.  So, thank you, Dr. Krystal, for using a bio from 10 years ago because that gives me a picture from 10 years ago when I had hair.  So, that's really pleasing to me.  My desired future state for the health care system, and for the VA in particular, where I work, is that we leverage the data produced in routine practice to learn and to plot our path forward.  And I'm not the first person to say that.  The IOM and the National Academy of Engineering, 15 years ago, said that in their report and called it a learning health care system.  And, certainly, the IOM CER report a few years after that kind of said the same thing, about how we should use health care data. 

I think that, for me, in the work I do as a psychiatrist, I think there are two things that we can do that are important.  One is to kind of look at current practice and see how it works in the real world.  And currently, I'm wrapping up a DOD grant, where we're comparing five medicines that we know were effective for PTSD in clinical practice, using patient reported outcome data, as well as pharmacy data, and data from psychotherapy notes on what treatments they're getting for PTSD at the same time, and, looking at not only the effect overall, but in specific clinically important subgroups, such as women, people with traumatic brain injuries, and so forth. 

So, from that work, we're kind of moving into a new area of research, which I hadn't really imagined up until about two years ago, when I started to write an R01 with my colleague Jamie Gradus.  We're going to use the data that we've developed for our DOD-funded study, which is about two million patients with PTSD in the VA from 2000 to 2019 -- about 20 years of data; 500,000 have pre-post outcomes information about their -- about their PTSD, called the PCL.  We're going to look at about 2,000 drugs that are FDA approved.  We'll be looking, using a method called tree scan, developed by our colleagues, Martin Kulldorff and Krista Huybrechts, down at Harvard, to scan for drugs and classes of drugs that may actually be effective for PTSD that we don't know about.

The unique thing about our tree scan is we'll be considering drugs in terms of mechanistic classes, you know, such as alpha-2 antagonists or whatever, rather than clinical classes, such as, you know, anticonvulsants or antihypertensives.  So, we'll be looking at mechanism rather than what they're used for.  My colleague Jang Wi [spelled phonetically] has been working with me for many years now on the DOD work, where we're comparing the currently approved drugs.  As we discover things with tree scan, we'll be comparing the drugs we discover with tree scan to drugs that don't appear to be effective for PTSD but are used for the same thing, in order to get somewhat at a causal analysis.  And, then, of course, once we're done with that and we find our top candidates, we'll be doing the same thing to look at the safety of the drugs that we think should be studied or used moving forward. 
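As a very rough sketch of the data organization behind that idea -- grouping drugs by mechanistic class and summarizing pre-post PCL change -- and explicitly not the tree scan method itself, one might lay the data out like this (all patient IDs, drug names, classes, and scores are invented):

```python
# Hedged sketch of the grouping idea only -- not the tree scan analysis
# Dr. Shiner describes. Column names, drug classes, and values are invented.
import pandas as pd

records = pd.DataFrame({
    "patient_id":  [1, 2, 3, 4, 5, 6],
    "drug":        ["drugA", "drugA", "drugB", "drugB", "drugC", "drugC"],
    "mech_class":  ["alpha-2 antagonist", "alpha-2 antagonist",
                    "SSRI", "SSRI", "anticonvulsant", "anticonvulsant"],
    "pcl_pre":     [62, 55, 58, 60, 57, 64],   # PTSD Checklist before treatment
    "pcl_post":    [45, 40, 50, 49, 55, 60],   # PTSD Checklist after treatment
})

# Pre-post change per patient, then summarized by mechanistic class rather
# than by clinical indication, mirroring the grouping strategy described above.
records["pcl_change"] = records["pcl_post"] - records["pcl_pre"]
summary = records.groupby("mech_class")["pcl_change"].agg(["mean", "count"])
print(summary.sort_values("mean"))
```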

So, that's just an example.  I know there are lots of different examples and thoughts about how to learn from health care data, but this is kind of how we're doing it.  So, just to give you a sense, I think there are -- I wrote two challenges.  I think there are actually three.  The third one I can't write down.  But, so, one big challenge is that the VA is a unique computing environment.  So, we have data on patient reported outcomes that have been collected since about 2008.  We have really very granular information about prescriptions and when they were picked up, when they were mailed, what the doc -- what the doctor said in the comments bar, et cetera.  And we also have access to all notes written in psychotherapy encounters, so we can train computers to read those notes and say whether people got, you know, psychotherapy that's effective for PTSD while they were getting these medicines, for example.  So, we can account for confounding a little bit better. 

So, you know, whether that kind of data exists outside the VA is a big question on my mind.  I haven't heard of other health systems having all that, although I'm sure they will soon.  It's also been very challenging to share our data sets with people outside of the VA.  The VA -- it just has a lot of challenges around making sure we're protecting health care data at the same time that we're learning from it.  So, I think that's where our colleagues at VINCI, which is the VA data repository, come in.  I've been working on that for several years, and hopefully, there will be something coming soon. 

The last thing is the VA's about to go through, or is beginning, right now, a transition from our homegrown EMR, VistA, to a contract with Cerner, and, you know, things may be better in the end, but for -- you know, for a period of as many as 10 years, there may be holes in the data, and the data that we end up with in the end might be better, but it might also be different.  So, we might be able to ask different kinds of questions, but perhaps not the same questions we've already been asking.  Thank you.

Andrew Krystal:
Thank you, Dr. Shiner.  I just want to remind our speakers, because we're running a bit behind on time, to try to keep their remarks to two minutes, maximum.  I know there are a lot of exciting points to raise here, and we'll all have -- hopefully, have plenty of time to bring them up in discussion.  Our next speaker is Diana Clarke.  Dr. Clarke is an experienced epidemiologist, research statistician, and educator with a long history of working with academic institutions, specialty organizations, and behavioral health agencies and hospitals.  Please, go ahead, Dr. Clarke.

Female Speaker:
Dr. Clarke, you may be muted.

Diana Clarke:
Sorry about that.  Yes, I was.  So, I'm going to jump right in, then.  One of APA's missions is to help psychiatrists and other behavioral health providers provide high-quality care to patients with mental and substance use disorders.  And we know that there's variability in where these providers are, where they practice -- variability in terms of how they practice, and, also, the patient populations they serve.  So, they have variability in terms of access to different types of resources.

And, so, it was partly that that actually led to us, you know, developing this national mental health registry that APA calls PsychPRO.  And the goal for PsychPRO was to kind of bring together data and information across settings -- practices, whether it's solo practices, academia, emergency, outpatient, inpatient, and primary care.  Because we know that the majority of patients with mental and substance use disorders do present to primary care and get the majority of their care from primary care.  And, so, it's trying to bring together all of this data, whether it's data from electronic health systems, data from patient portals.  And, also, there was this grand idea that we'd also try, for patients that agree to do this, to actually also bring in data from any kind of wearable technology. 

And the promise of this is, if we can actually get all of these types of practices and different settings to actually join PsychPRO and bring in all of this data, the promise of that would be that, using all of the big data analytics, we would be able to share data, not only with the patients who are providing this information -- because, in terms of speaking with patients, we find out that sometimes they have a problem with providing a lot of data and getting no information back.  So, it's a way that we can actually share information with patients, but then also use that information to actually develop practice guidelines, to actually create or identify effective measures of psychopathology, so we can actually have improvement in our diagnostic manual, or any kind of diagnosis for mental health and substance use disorders.

And, then, also use that information to provide real-time decision support and real-time interventions to the clinicians, regardless of where they are and where they're practicing.  And the hope is that, with this type of integration of different health systems and different data types and patients from various parts of the country, we could actually enhance the quality of care for mental health and substance abuse patients.  And that would be because the idea is to create a system that learns, provides information to patients and to providers.  And as we get information from them, in terms of how things are working, we identify where quality of care is great versus poor, and then create learning opportunities for providers, and, actually, also provide resources and information for patients as well. 

So, you know, some of the challenges that we do see with this -- and I've listed some here, but one of the biggest ones is actually the challenge of bringing on some of the largest settings, especially academia, where you get into the issues -- legal issues, et cetera.  And it's not just the legal issues, because we've worked out most of those.  It's the trust in sharing data.  And I know the person before talked about sharing data outside of the VA.  And, so, it's kind of overcoming some of those barriers, and then also overcoming barriers with the different electronic health systems, and whether they're willing to allow us to integrate with their system, to pull data into the registry -- but not just to pull the data in, because the usefulness of this would be to actually push the information back to the EHR, so clinicians can use that information in their clinical care.

So, those are a couple of the challenges that we've actually faced.  We also know that there are [inaudible] challenges, as well.  I mean, as part of this registry -- because it was built to provide -- as a quality improvement tool, and to be used secondarily for research purposes.  Then we get into thinking about the ethics of “Do we need patient consent?”  And because it's primarily a quality improvement initiative, we don't necessarily require patient consent to actually use the data to inform those things; where we do require patient consent -- based on the common rule -- is to get patients to consent to be re-contacted for future research studies. 

And -- but it has great promise, in the sense that it creates a repository of data that can be used to answer a lot of interesting questions; it can, as I said, drive revisions to diagnosis and inform practice guidelines.  But it also provides a repository of patients who have actually completed consent to be re-contacted and can be available to participate in different types of research studies.  And we know one of the limitations is that many patients with mental and substance use disorders don't necessarily make it into a lot of these research studies that have been done to date.  And, so, I'm actually going to stop there, just because I'm trying to make sure I stick with the two minutes, and I'm hoping I didn't rush too much there.

Andrew Krystal:
Thank you very much.  Our next – sorry.  Thank you.

Female Speaker:
Those who are not speaking, please mute yourselves.

Andrew Krystal:
Our next speaker is Dr. Neal Ryan.  Dr. Ryan is an expert in pediatric mood and anxiety disorders.  He's led large-scale studies of behavioral change in children and adolescents diagnosed with mood disorders and has investigated the effectiveness of therapies; he is currently the principal investigator of a large NIH-funded project related to the neurobiology of childhood anxiety and depression.  Dr. Ryan.

Neal Ryan:
Hi.  Thank you.  Let's try for the two minutes.  Look, what I want to say today is that social determinants of health really matter, and you can get those only by natural language processing of the chart.  So, if you want high-value predictions, you're going to use the stuff that we're talking about and will talk more about, but you probably want to add the social determinants, as well.  You know, losing your job, becoming homeless, having marital conflict, imprisonment, and so forth, still matters.  I'm not seeing my whole slide here, by the way.  Anyway, so all of these are hidden in free-text notes in the medical record.  They're not in structured data, by and large.  You know, they're in social work notes, they're in emergency room notes, in the free text.

How's this been done in the past?  Well, there are several things that are too simple to work well, like bag of words or worded bags.  There are structured ways that work well in medicine, like cTAKES -- the clinical Text Analysis and Knowledge Extraction System.  And then there are the new approaches that started two years ago, with the deep learning language models, BERT and ULMFiT and so forth.  And those are far and away the best way to understand sentences that's available right now, in any sort of machine learning.  And I think they'll become much more important in the things that we're going to be talking about today. 

The example that I wanted to give is David Brent, Rich Tsui, and I looked at predicting suicide attempt after an ER hospital visit -- a high-value thing.  You do the best prediction you can, using all the structured data and looking, say, for example, at the seven-day window -- suicide attempt in the next seven days is really important when you've got somebody in the emergency room -- you know, do you hospitalize them?  Do you give them a really near-term appointment?  What do you do?  We got an AUC of 0.88.  As you guys, mostly, will know, an AUC of 1.0 is, you know, an oracle, the gods predicting.  An AUC of 0.5 is completely random prediction.  So, 0.88 is pretty nice prediction.  When we added the natural language processing data, it went up to 0.92.  So, that's a quarter of the distance to perfection.  You know, that's actually -- I'm sorry, that's 33 percent -- so, with the gap between 0.88 and 1.0, improving that by 0.04 is an improvement of a third of the way to perfect prediction, from what was already good prediction.  The things that mattered in that one were marital conflict, imprisonment, family support, going into rehab.  So, lots of good and bad things that were not in the structured parts of the chart were in the free text. 
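For concreteness, the correction works out as the fraction of the remaining gap to a perfect AUC of 1.0 that the NLP features close (the AUC values are the ones quoted above):

```latex
\frac{\mathrm{AUC}_{\text{structured + NLP}} - \mathrm{AUC}_{\text{structured}}}{1 - \mathrm{AUC}_{\text{structured}}}
  = \frac{0.92 - 0.88}{1 - 0.88}
  = \frac{0.04}{0.12}
  \approx 0.33
```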

Why is this hard?  What are the challenges?  Well, there's some technical ones, like completely anonymizing things, getting access, idiosyncratic abbreviations, local abbreviations.  But the really hard one is, is that the best techniques, the deep learning language models, are exactly two years old -- in the world, not just in psychiatry or medicine.  And, so, we don't completely know how to use those as well, and we don't have them trained up.  I think that, for other high value predictions, like readmissions, like starting to use -- starting to abuse opioids after you give them an opioid dose, like violence -- they'll be things where you want to predict as well as possible.  I think you'll want to add this part to it as well.  Thank you.
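As a deliberately crude illustration of what "finding social determinants in free-text notes" means in practice -- far simpler than the cTAKES or BERT-style models Dr. Ryan describes -- a keyword scan over an invented note might look like this:

```python
# Crude sketch: flag social-determinant mentions in a free-text note by
# keyword matching. Categories, keywords, and the note are invented; the
# models discussed above (cTAKES, BERT-style) are far more sophisticated.

SOCIAL_DETERMINANT_TERMS = {
    "housing instability": ["homeless", "eviction", "shelter"],
    "marital conflict":    ["marital conflict", "divorce", "argument with spouse"],
    "incarceration":       ["imprisonment", "incarcerated", "jail"],
    "job loss":            ["lost his job", "lost her job", "unemployed", "laid off"],
}

def flag_determinants(note_text):
    """Return the categories whose keywords appear anywhere in the note."""
    text = note_text.lower()
    return [category for category, terms in SOCIAL_DETERMINANT_TERMS.items()
            if any(term in text for term in terms)]

note = ("Pt seen in ED after argument with spouse; reports he was laid off "
        "last month and is now staying at a shelter.")
print(flag_determinants(note))
# -> ['housing instability', 'marital conflict', 'job loss']
```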

Andrew Krystal:
Thank you very much.  Our last panelist is Dr. Basit Chaudhry.  Dr. Chaudhry is an internal medicine physician and medical technologist.  His expertise is in the areas of health care payment, clinical service design, and the use of data analytics to improve clinical and financial performance in health care.  He works to develop more scalable ways of delivering health care and founded Tuple Health in 2013.  He's also been a lead research clinician at IBM Research.  Please proceed, Dr. Chaudhry.

Female Speaker:
You may be muted.

Basit Chaudhry:
Sorry about that.  I apologize.  So, again, thank you very much for the opportunity to have this really interesting discussion.  Just for my background, I'm a primary care physician, so a lot of my perspective on mental health is really from that context.  Much of my work now involves working with high-complexity patients, and it's very, very common that there's overlap between their medical conditions and their behavioral health issues.  I think in many ways the distinction is very artificial, specifically given limitations in access to care.  Many patients around the country have limited access to psychiatry, specifically. 

So, just in terms of kind of the vision -- I'm just going to try to read some of this, just to try to stay on time.  I kind of divided it into kind of broad lines, then more specific ones, related to techniques from machine learning and narrow AI, as I've kind of designated here, and I'll talk a little bit about challenges.  So, kind of broadly, just in terms of a vision, from what I see for mental health and behavioral health issues: broadly, [unintelligible] a learning system for mental health, in which generalizable causal inferences deepen and broaden across the clinical enterprise, in a continuous, repeatable, scalable, and cost-effective fashion.  Here, clinical decision-making and process execution will be enriched by causal explanations, which progressively improve by feedback loops involving care delivery itself.  These feedback loops will be operationalized through technologies that systematize quasi-experimental methods and extend them into the clinical enterprise, democratizing the rigor from the research setting into practice.  I think one part -- a major part of this, which I'm excited about, is the way in which different kinds of insights can feed back into -- and generate new theories and enhance their development.  I think it's a major part of generalizing causal inference, obviously. 

Broadly speaking, these hybrid care delivery and quasi-experimentation platforms will address threats to validity through two primary mechanisms.  One is through structural design features.  I think, in many ways, the discussion in health care, broadly, has been increasingly focused on data.  I think, in many ways, we are not focusing enough on structural design features to address bias.  How do we systematize those?  I think, secondarily, [unintelligible], provide different quantitative techniques and modeling methods, through which you can minimize the reducible error inside of models, related to picking different kinds of functional forms -- related to this within the context of, again, narrow AI, so to speak.  And I -- my comments are outside of any kind of generalizable notions of intelligence that can be [unintelligible] machines.

I think it can, in theory, make meaningful advances across multiple fronts, particularly around counterfactuals, quasi-experiments, designing more relevant control/comparison groups, and rendering different kinds of functions -- mathematical functions -- for modeling purposes, that specifically change the variance-bias tradeoff that we're usually forced to make, much of which is just enabled by advances in computational power. 

The challenges to this, I think, are quite significant.  I think one, broadly, right now, is that there's often not a definitive pathognomonic constellation of signs or symptoms for mental health issues.  That creates a lot of complexity in terms of data quality and data annotation for supervised learning methods and related techniques.  I think there's very limited adoption of psychometrically validated instruments, particularly within the clinical enterprise.  And that really diminishes the quality of data we have.  Construct validity is, I think, an enormous challenge in this field, in particular, and something I get quite concerned about, because we often don't know exactly what we're modeling; broadly, data quality.  Clinical data modeling, specific to mental health -- issues of selection bias, omitted variable bias, all these things that we see when we work with existing data sets. 

So, that's kind of, broadly -- I think one other thing I'll just mention: generating ground truth for algorithmic training is incredibly difficult.  I think it's going to be especially difficult in mental health, both in terms of its cost and complexity.  And, finally, I think I'll just mention, I think algorithmic interpretability -- we really want to change clinical inference.  I think my fundamental assumption is that causal inference through explanation is critical to that.  So, I think the interpretability of what we produce becomes especially important.  So, I'll just stop there.  Thank you.

Sarah Lisanby:
I just want to do a time check.  We now have the remaining 15 minutes of this session for discussion, to be moderated by Justin Baker and Andy Krystal, our co-moderators, and also to remind the attendees that you can insert your questions in the Q and A box, and Adam Thomas will be curating and bringing those questions to the panel.  So, handing it back to Justin and Andy.

Andrew Krystal:
Thank you, Holly.  We've heard about a number of different perspectives on what the future vision looked like -- looks like -- and it's clear that there are a lot of different components to this.  I'd be interested to hear from the panelists about their perspectives on the other talks or presentations they heard and how they impact on their view of the big picture down the road.

Brian Shiner:
Hi, this is Brian Shiner.  I talked about VA data.  I was curious, a few of the presenters talked about deeper learning approaches than we're using.  We're using basic tools. 

[inaudible commentary]

Sarah Lisanby:
[inaudible] please mute your phones.

Brian Shiner:
Okay, I'm sorry.  So, what I was just talking about -- so, we're using more basic ML techniques like, you know, random forests and lasso and things that have been out there for a while, and, kind of, the challenge for us for thinking about those deeper techniques is, kind of, managing the data is such a task in itself and getting the data clean.  Are you guys finding that you need to have less clean data for these more deep techniques, or what -- how are you managing, you know, spending all your time on ML when there's so much data management to do?

Sarah Lisanby:
Would one of the panelists like to respond to that or one of our --

Mi Hillefors:
Please feel free to un-mute yourself if you want to participate in the discussion because you're all probably muted at this point.

Basit Chaudhry:
I can -- so, yeah -- you know, Brian, on your point -- just, like, I guess, from my experience, I think it's difficult to overestimate how important, essentially, the data management component of all this is.  I wish, obviously, that weren't the case, but I think, if anything, we don't pay enough attention to it often.  I think in areas that are very discrete, like computer vision and things like that where, you know, a lot of the focus has to -- just by the nature of the field or the domain, you know, people have not had to focus on that as much in a certain way.  And I think it's just very difficult within domain-specific applications, particularly ones which are knowledge intensive; data management and clinical data modeling, I think, are just a fundamental part of deriving, I think, better predictions and, more importantly, I think, better causal inference, which is what, I think, most directly affects patient care.

Neal Ryan:
If I could try a short answer to that one.  This is Neal -- whoops.  Clearly, for panel data, lots of people are moving to extreme gradient boosting, which may be a little bit better than the random forest but is similar -- whatever -- the deep learning stuff, which gets, you know, a lot geekier, is valuable for imaging data or image-like things, or long time-series data or time-series-like things, and for language, and, you know, probably almost certainly would be enough better that it's worth doing, but it does get more complex by a whole lot, and, you know, none of it makes the data cleaning and preparation any easier.

Brian Shiner:
Thanks, those are great answers, thank you.

Sarah Lisanby:
And Adam Thomas, you may want to bring forth some of our questions from the attendees.

Adam Thomas:
Yeah, we've got a couple questions, and I'd encourage folks to type any more questions they may have into the question box.  The first one is focused on the data collection question: we need to be aware that such useful data will need to be added during clinical patient meetings.  Patients already have a problem with clinicians' split attention between them and the medical health record.  Improving the medical health record is important, but is this problem on the radar?  This comes from Reed Polo [spelled phonetically].

Sorry -- I didn't decide who to direct that to.  Andy, do you want to take that one first?

Andrew Krystal:
I'm sorry -- were you directing that to Andy?  Is that what you said?

Adam Thomas:
Yes --

Andrew Krystal:
Oh, okay --

Adam Thomas:
-- or unless you want to re-direct it to another.

Andrew Krystal:
-- well, is there anyone on the panel particularly inclined to answer that?

Tim Mariano:
This is Tim Mariano.  I think I'll just say one thing very briefly.  I, you know, I agree, right?  This adds burden not only to the collection of the data, but also to the patient visits as well.  I mean, I think the last thing that most clinicians will want is another dashboard to check and review, you know, either before or during the patient visit.  So, you know, these tools, while great, also need to be minimally intrusive to the critical patient-clinician interaction, and whatever can be done on the back end to assist with that, I think, will be important.

Justin Baker:
Yeah, and this is Justin -- I'll just add to that.  I think, you know, one of the things we think about as treaters is that there are multiple purposes to a clinical encounter.  You know, currently there's an information-gathering component to that, obviously, but there's also a relationship-building component.  And if you ask, you know, what's the purpose of that relationship building -- partly, it's that we know the therapeutic alliance helps keep people in treatment and keeps people sustained over longitudinal periods.  And so, I think it's going to be really interesting to ask the question of, for the amount of money we're paying for particular bits of information, what is the actual cost-to-information-quality relationship for all the types of encounters or biomarkers we could be capturing?

We heard this morning about, you know, using EEG, using, you know, data trails from the EHR, which, of course, come from those visits.  We'll be hearing, you know, about other types of biomarkers, like wearables.  And, you know, when you think about the information content of those things, they may be worth as much, so to speak, as that hour-long visit with the clinician -- and yet, there's still some value to that relationship-building component, but it's not clear necessarily that the MD needs to be the one in every case to maintain that relationship.

But I think we're going to have to, kind of, radically rethink the models in which we provide care, where we can really balance information and prognostic value against these, kind of, tried-and-true methods of the patient-doctor relationship, and really, kind of, strategize now in a high-need, resource-poor environment where we have far more need than we will ever have treaters available to spend minutes with people.  So, I think we're going to be talking more about that future state.  Is it one where traditional psychiatrists provide that hour, or one where we have, sort of, networks of humans who can provide one-to-one patient contact and other methods to buttress, kind of, the information-gathering piece?

Diana Clarke:
This is Diana --

Male Speaker:
Sorry --

Diana Clarke:
-- sorry -- this is Diana Clarke.  So, one of the things that I'd also like to add is that I really do believe we need to bring the EHR companies to the table as well, because if we can't integrate all of these things with the EHR, we will always have the clinicians, and/or the patients, moving across multiple platforms to get the information that we need.  And this is valuable information to actually inform treatment and patient care, but if we don't bring the EHR companies to the table and actually work on something so we can integrate these systems, we're going to always have this problem.  And so, we will always only get a piece of the data for anything that we do, and we'll always be missing pieces of information.

Sarah Lisanby:
I'd like to second Diana's point there.  It's very important to find a way of integrating the information from these data streams into clinical care at the patient encounter, and we're delighted to have a couple of representatives from one of the EHR companies with us today, from Epic, who will be participating in session three.  All panelists across all sessions, please feel free to chime in during the discussion sessions if you'd like to make a comment.

Justin Baker:
Maybe it's worth it to go to [unintelligible] Doctor Laurie's question.  [unintelligible] you want to just go ahead and un-mute yourself and ask the question about, kind of, ensuring that all of these new innovations are able to make it across the digital divide?

Sarah Lisanby:
Yeah, and so the community may not be able to un-mute themselves.  So, we'll be --

Justin Baker:
Okay.

Sarah Lisanby:
-- reading questions from the Q and A box.

Justin Baker:
Okay, no worries.  So, yeah, the question is: any thoughts on ways we can ensure that future innovations are available to underserved populations, so that the digital divide doesn't become a digital mental health divide?  And maybe I'll point that question at Amit.

Amit Etkin:
Yeah, so, thanks for the question.  I think that's a really important issue.  I think we often do look at things from the lens of, you know, research, where we have a lot of resources, and hospitals that do research tend to have a lot of resources, and that's a major limitation.  As we think about what the tools are, we also have to think about what the tools are providing.  So far, as we think about these digital mental health tools, there have yet to be really strong cases for what they really add.  But let's take a canonical example of a smartphone-based tool.  If it's something that becomes useful, it becomes useful in part because it saves money for health systems, and that's really the route by which insurance companies and federal insurers will adopt things -- because it ultimately leads to better care but saves money.

Part of that has to be thinking about the equipment, and accessibility has to be part of that financial equation.  And I think, as cheap as these things are becoming, it's now becoming quite feasible to provide smartphones.  There are programs for elderly and low-income people, for example, giving them smartphones with data plans.  So, I think that those things are overcome-able; we just have to think of that as part of the package, exactly as you raised it.

Justin Baker:
And I think there's no question that at this time, even in the past two weeks, for people who didn't have a cellphone or a tablet or some access to the internet, there's strong consideration at multiple levels of getting all these new technologies into conventional care settings, even inpatient units, where cameras and other things haven't been the preferred mode because of privacy concerns.  But now, that's being considered, which is exciting from the standpoint of accelerating the move towards telehealth-type solutions that would work if all you have is a device.  It's also rapidly raising some of the ethical considerations.  So, I'm excited to hear more about that later today.  We have about four more minutes --

Diana Clarke:
I was going to just say something really quickly -- Diana Clarke.  Yeah, this is really, very, very important to me, and it relates to one of the studies that we're currently doing through the APA mental health registry, which is a quality measure development project.  One of the things that we do find is that a number of patients don't have access to the internet, so they're not able to participate.  But we also have to remember our patients and realize that their level of comfort with technology is also a barrier that we're going to have to overcome.  And so, how do we reach out to patients -- mental health patients -- and speak to them in language that they can understand, so they can feel comfortable using it?

So, it's not just having access to the internet; it's also patients basically saying, “I don't do the internet.  I don't complete anything that has any access to anything.”  And so, how do we overcome those barriers so that the information and the data that we're collecting is really generalizable to all patients with mental health issues -- mental and substance use issues?

Andrew Krystal:
I just have a quick question for the panel.  The presentations have all described a number of different types of data that are likely to be important in the future vision, coming from lots of different sources.  I'm interested to hear what people's vision is about how integration could happen.  Do people see, let's say, the NIH having a role in this, as somehow being a clearinghouse for data and predictive algorithms?  I'd appreciate your thoughts on this.

Justin Baker:
Yeah, maybe I'll just open it up, since I hear we have just a couple of minutes left, and ask, sort of, more broadly: what's going to be the role of industry versus NIH in, sort of, maintaining those large data sets?  Because, you know, there are lots of companies whose job it is to maintain large data sets.  It's not traditionally the domain of health care, although I'd be curious what panelists think: maybe in five years or less, what is going to be the distribution of responsibility around these issues?

Sarah Lisanby:
If you're speaking, you may be muted.  I was wondering if Basit might have a view on that question.

Basit Chaudhry:
Sorry, I was on mute -- I had to change locations.  But yeah, I think if you look at, for instance, some of the different techniques we've talked about from machine learning, and different forms of quantitative techniques like deep learning, you know, the advances made in those fields have been driven largely by two things.  It's interesting.  A lot of the algorithmic work in those areas is quite old, dating back to, like, the 1980s.  The advances made recently that let a lot of people do the kind of sophisticated, say, natural language processing that we talked about before really come down to two things.

So, one, it's the availability of computationally intensive graphics chips, and then, two, large corpuses of well-annotated data.  So, for instance, in computer vision, one of the major reasons why so much progress has been made is the data available through ImageNet.  And so, without high-quality corpuses of data that are well-annotated -- I can't emphasize that enough -- it is very difficult to train algorithms in sophisticated ways for domain-specific applications.  So, again, a lot of the innovation we see now is literally related to data available through ImageNet that's been annotated by humans.
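One concrete way that human-annotated ImageNet data gets reused in domain-specific work is transfer learning.  The sketch below is only illustrative (it assumes torchvision 0.13 or later, and the two-class clinical imaging task is hypothetical):

```python
# Minimal transfer-learning sketch: reuse an ImageNet-pretrained backbone and
# train only a new task-specific head on a small, well-annotated clinical set.
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pretrained weights learned from ImageNet's human annotations.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer for a hypothetical two-class task; its new parameters
# are trainable by default, so only this head would be fit to the clinical data.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)
```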

Sarah Lisanby:
So, --

Justin Baker:
Wrap up quickly -- is that okay --

Sarah Lisanby:
-- yes, go ahead.

Justin Baker:
-- thank you, Basit.  And I completely agree that annotation is really one of the next key roles for health care in terms of thinking about the [unintelligible] going forward.  If you can't measure it, you can't manage it.  And we're seeing maybe an influx of a measurement culture in psychiatry, which I think is a really good thing, and which has been present in oncology and every other branch of medicine.  I think what we're all talking about today is this new array of data types and a willingness to measure things in psychiatry that hopefully, in five or ten years, or maybe sooner, will allow those learning systems to begin to churn on that data, iterate, and develop novel therapeutics. 

So, thanks, everyone, for providing their exciting visions for the future of the field.  Thank you also to my co-moderator, Andy Krystal, and all the panelists for taking the time as well as everyone listening.  So, I'll turn it back over to Holly who can take us to the next panel.


Sarah Lisanby:
Thank you very much, and I think that's a good segue, now that we've talked about where we'd like to go to achieve a health system that's able to provide effective mental health care for all patients, wherever they receive their care, and one that takes into account the social determinants of health.  In the next session, we're going to look at where we are now: what can we learn from existing networks?  I'd like to introduce our moderators for this session.  Guillermo Sapiro is a professor in the Pratt School of Engineering at Duke University, and he works on the theory and applications of computer vision, computer graphics, medical imaging, and machine learning.

I’d also like to welcome Dr. Helen Egger, who is chair of the Department of Child and Adolescent Psychiatry at NYU, and an innovative researcher employing new technologies to advance mental health care, especially integrative pediatric mental health.  Dr. Sapiro, please take it away.

Guillermo Sapiro:
Thank you very much, Holly, and thanks to everybody who is participating.  As you can see, we have a very prestigious team of speakers and discussants coming up in the next 45 minutes.  We have five minutes for each speaker, which will take us about 30 minutes for the first part, and then about 15 minutes for discussion.  [inaudible] introducing every speaker, which takes a bit of time, we'll let them use all the time.  You can see the very prestigious speakers that we have, basically, on this slide.  They're very diverse, each one of them is a renowned expert in the field, and they come from multiple institutions.  So, as I say, we are not going to use up their time presenting their bios, so we can give them the time to teach us and to educate us.
Helen and I also came up with a few guiding questions that Helen is going to present next, before we move to the speakers.  Thank you, and stay safe.

Helen Egger:
Thank you, Guillermo, and thank you to Holly and the rest of the organizers for including us in this wonderful conference.  We had some questions that we wanted people to think about as they were listening to each of the presentations, and many of them have already come up in the first session.  So, we put them in, sort of, three buckets.  The first is around these questions about data acquisition and curation or the structural design features and data management; so, where does the data come from, how is it measured, who are the subjects, and importantly, I'm speaking as an epidemiologist, what is the sampling frame of the subjects who participate?
 
Then, issues around data combination, integration, and -- as was also brought up -- data management.  I think the question of how we combine multi-modal data, particularly over time, is a key challenge, as is how we deal with missing data.  And then, how do we test the clinical significance of our results, both in the population we created them for and in other use cases -- particularly, as came up before and is an interest of mine, in health disparity populations?  Then lastly, the huge challenge of bias, which comes in many ways.  We give one example here about how often the mental health information is incomplete in the EHR, and there are many other examples.  And then, always, the pesky question of how we distinguish between correlation and causality.
So, that's the frame, and now, I'm going to hand it back to Guillermo to start our speakers.

Guillermo Sapiro:
Thank you very much, Helen.  And as we said, we have five minutes per speaker.  So, Aziz, you are first, and hopefully we'll not have to give you the five-minute warning because you're [unintelligible] -- I apologize in advance if we do.  Thanks.

Aziz Nazha:
Thank you very much.  So, thank you, Dr. Lisanby and Dr. Gordon, and thanks to all of you for the opportunity to be here with you in such a great workshop.  So, I'm an oncologist.  I treat cancer patients.  My specialty is treating a form of blood cancer, Myelodysplastic Syndromes, and I also direct the Center for AI.  So, we use AI, machine learning, and deep learning to try to solve some of the challenges that we have clinically.

So, just to give you a quick background about the disease.  This is a form of blood cancer that happens in the blood [unintelligible] diagnosis about 10 [unintelligible].  We make the diagnosis by actually doing a bone marrow biopsy, and an experienced pathologist can look at the slide and give us the answer as to whether the patient has Myelodysplastic Syndrome or another form of blood cancer.  And then the next step after finding the cancer is to, kind of, stage the disease, and since this is a blood cancer, we don't do CT scans.  What we do is take the blood counts, look at the patient's chromosomes, plug them into a scoring system, and then come up with a risk score: whether this patient is at higher risk of progression to advanced stages or at lower risk.

And this is a very important step because our treatment guidelines are based on risk.  So, if a patient has higher risk, we tend to give them more aggressive therapy compared to a patient with lower risk.  So, what you see on the slide are several of the challenges that we have clinically, and the question is whether we can solve these challenges using machine learning and deep learning.  One of the biggest challenges is that the diagnosis of the disease is very subjective.  About a third of the bone marrow slides that come to us from community oncologists have a misdiagnosis -- whether they thought the patient had the disease and they don't, or vice versa.

So, what we have done so far: we took some of the pathology slides and, using computer vision, we've been trying to say, is this MDS versus not MDS?  And we have very exciting results that are not shared yet.  But the other project that you can see on the lower slide is looking at the blood counts and the genomic signature from the patient: can we build a machine learning model that can differentiate MDS from other myeloid malignancies?  We have done this project where we actually compared MDS to other myeloid malignancies that look exactly like MDS.  What we were able to find: we built a machine learning model that achieved an area under the curve of about 96 percent differentiating MDS from other diseases, just using clinical variables and mutation data.  So, you could envision a future of not doing a bone marrow biopsy. 
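For illustration only (this is not Dr. Nazha's actual model; the file, features, and label below are hypothetical), a classifier of this kind combines blood-count variables with binary mutation flags and is scored on a held-out set:

```python
# Hypothetical sketch: clinical variables plus mutation flags -> MDS vs. other
# myeloid malignancy, evaluated by area under the ROC curve on held-out data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("mds_cohort.csv")            # hypothetical: CBC values + mutation 0/1 columns
X = df.drop(columns=["diagnosis_is_mds"])
y = df["diagnosis_is_mds"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
print("held-out AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```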

The other thing, in terms of prognosis: what we found is that what we are predicting with this staging system and the actual survival of the patient are completely different.  So, we have used machine learning to take the clinical data and the mutational data and build a model that can provide personalized predictions.  And if you can select here -- this is supposed to be a video -- if you click play, you can see me here changing some of the clinical variables of the patient -- making the patient actually worse -- and then you can see the survival probability of the patient change.

So, this is just to talk about, you know, personalized prediction and data specific for a given patient.  And finally, we've also used some machine learning technologies to try to predict response to chemotherapy.  So, one of the chemotherapies that we give to our patients is called hypomethylating agents.  Today, we don't have any models to give us an idea of which patient would respond or not, and it takes about six months for patients to achieve a response.  So, in one of the projects we did, we took the recommender system algorithm -- if you watched this movie, this movie, this movie, you're going to watch this movie -- and applied it to say you're going to respond or not respond.  Based on that algorithm, we developed genomic biomarkers that can predict resistance in about a quarter of the patients, and when they are present, the accuracy of prediction of resistance is about 93 percent on an independent validation cohort.

This was also validated in the lab, where we used CRISPR-Cas technology to knock out those genes and we generated those cells in the lab.  And finally, one of the projects we're working on now is using time-series analysis.  So, sequence -- as in a sentence -- matters.  When we give chemo, the blood counts change over time.  We monitor the change in the counts over time, and we predict response or no response.  We were able to build a model that, just by monitoring the changes in the blood counts on the CBC, could predict whether the patient was going to respond to chemo or not with an 83 percent area under the curve.  So, these are some examples where we have used different types of machine learning and deep learning technologies in different domains, trying to improve our knowledge and clinical decision-making tools to help our MDS patients.
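One simple way to turn serial CBCs into inputs for that kind of response model (a hedged sketch only, not the method described above; file and column names are hypothetical) is to summarize each patient's counts as baseline, slope, and variability features:

```python
# Hypothetical sketch: summarize each patient's serial CBC values into simple
# trend features, then predict response vs. no response with cross-validated AUC.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

cbc = pd.read_csv("serial_cbc.csv")    # hypothetical: patient_id, day, hemoglobin, platelets, anc
labels = pd.read_csv("response.csv")   # hypothetical: patient_id, responded (0/1)

def trend_features(group):
    feats = {}
    for col in ["hemoglobin", "platelets", "anc"]:
        feats[f"{col}_baseline"] = group[col].iloc[0]
        feats[f"{col}_slope"] = np.polyfit(group["day"], group[col], 1)[0]  # per-day change
        feats[f"{col}_var"] = group[col].var()
    return pd.Series(feats)

X = cbc.groupby("patient_id").apply(trend_features)
y = labels.set_index("patient_id").loc[X.index, "responded"]
auc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5, scoring="roc_auc").mean()
print("cross-validated AUC:", round(auc, 2))
```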

Guillermo Sapiro:
[inaudible] Aziz, and thank you for keeping the time.  The next presenters are going to come in tandem: Greg Simon from Kaiser Permanente and Rebecca Rossom from HealthPartners.  So, they're going to take it one after the other, within 10 minutes -- hey, Greg, please go ahead.  Thank you very much.

Greg Simon:
Actually, Rebecca's taking this first part.

Guillermo Sapiro:
Oh, okay.  So, then -- perfect -- thank you.  Thanks -- it's your 10 minutes, your -- you two together will be within the next 10 minutes --

Rebecca Rossom:
[laughs] Okay.

Guillermo Sapiro:
-- [inaudible] --

Greg Simon:
Okay.

Guillermo Sapiro:
Thank you.

Rebecca Rossom:
We'll do that.  So, this is Rebecca.  Greg and I are from the Mental Health Research Network, which is a consortium of 14 research centers embedded in large, comprehensive health systems.  Together, we serve a population of about 25,000,000 people across 16 states.  And, sort of, related to the conversation earlier, we all do use Epic for our electronic health records.  So, we each maintain a data warehouse following the Sentinel and PCORnet common data models and consolidate data that include electronic health record data, insurance claims, pharmacy dispensings, patient-reported outcomes, lab results, demographic characteristics, insurance coverage characteristics, and mortality, when we link up with state mortality records.

So, using these data sets, we have done some work to develop suicide risk prediction, and really have worked to predict suicide risk accurately and relevantly to the point of care, so that we can estimate risk for a specific encounter.  So, we have developed and validated models predicting suicide attempt and suicide death after mental health specialty or primary care visits, and now we're working on developing models for ED visits and for patients after inpatient stays.  Next slide, please.

So, for this model, we used 25,000,000 visits for 4,000,000 patients and developed models that have an overall accuracy, or area under the curve, of 84 to 86 percent.  So, that's what we have.  What we'd like to see is more accurate coding of self-harm events, to the discussion previously.  You know, we recognize the models are only as good as the data we can put into them, and the more we can minimize those false positives and false negatives, the better the models we'll have. 

We'd also love to have better, up-to-date social determinants of health -- things that aren't routinely collected in the EHR, or are not routinely updated, but can have significant predictive value for suicide attempts; things like job status, relationship status, and food and housing security, for example.  What we don't think we need is more complex or opaque modeling strategies.  You know, we have evaluated alternative modeling approaches, including tree-based approaches -- classification and regression trees, mixed-effects regression trees, random forests -- but honestly, so far, we've found that the simpler models are nearly as good.  We also don't think we need new model development for every healthcare system, and we're not letting the perfect be the enemy of the good. 

In our care systems -- mine and Greg's -- our suicide risk model has already been implemented, and that happened within about six months of the publication in my healthcare system, by the behavioral health case managers in our health insurance arm in Minneapolis, and within about nine months in a pilot behavioral health clinic at Kaiser Washington in Seattle.  Next slide, please.  Additionally, a tribally owned and operated healthcare system serving over 65,000 Alaska Native people in Anchorage and 55 rural villages took our model predicting suicide attempt after a primary care visit and re-validated it in a sample of about 50,000 visits in their clinics, and this table shows results from that re-validation.  And you'll see that the positive predictive value is a bit higher, reflecting a higher prevalence of high-risk patients, while the sensitivity was a bit lower, overall suggesting good usability in a very different system.  And this healthcare system is now also developing an implementation plan.  So, I'll turn it over to Greg.
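The re-validation arithmetic described here can be illustrated roughly as follows (a hedged sketch with made-up numbers, not study data): apply the existing model's risk scores to the new system's visits, flag a top risk stratum, and report PPV and sensitivity.

```python
# Hypothetical sketch of external re-validation: score visits in a new health
# system, flag the top 5% by predicted risk, and report PPV and sensitivity.
import numpy as np

def ppv_sensitivity(risk_scores, outcomes, top_fraction=0.05):
    # risk_scores: predicted probabilities; outcomes: 1 if a suicide attempt followed
    risk_scores, outcomes = np.asarray(risk_scores), np.asarray(outcomes, dtype=float)
    cutoff = np.quantile(risk_scores, 1 - top_fraction)
    flagged = risk_scores >= cutoff
    ppv = outcomes[flagged].mean()                           # events among flagged visits
    sensitivity = outcomes[flagged].sum() / outcomes.sum()   # events captured by the flag
    return ppv, sensitivity

# Entirely synthetic example: event rate rises with the risk score.
rng = np.random.default_rng(0)
scores = rng.random(50_000)
events = (rng.random(50_000) < 0.001 + 0.02 * scores).astype(int)
print(ppv_sensitivity(scores, events))
```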

Greg Simon:
Thanks so much.  Could you go on to the next slide?  So, I'm going to shift a bit and talk about some of the work we're beginning on developing personalized treatment pathways for depression.  And by way of introduction, I should say that Roy Perlis and I -- literally 10 years ago -- wrote a paper about what we needed to develop personalized treatment pathways for depression, making the point that we thought traditional randomized trials were not getting us there and would not likely get us there, and we said what we really need are comprehensive data on treatments and outcomes, coming likely from large healthcare systems where we could have the data volume necessary to answer some of these questions. 

If someone had told me 10 years ago, “Greg, 10 years from now, you will have made almost no progress on personalized depression treatment, but you will have accurate predictions of suicidal behavior,” I would have laughed out loud.  I would have thought that the likelihood of progress in those areas was exactly the opposite, but that's where we are today.  So, a few words about, sort of, you know, maybe why it's been a bit slower getting started on the personalized depression treatment.

So, what we really want are -- oops -- back to the previous slide -- okay.  So, what we want: we would like patients and clinicians who are selecting treatment for depression to have accurate and constantly updated personalized predictions, essentially to say, for someone with your characteristics -- those characteristics being your own pattern of symptoms, patterns of co-occurring conditions, your history of response to various treatments, and even other, sort of, omics data that might soon be available in electronic health records -- this would be the next step for you.  And I'd say where we are now is we're just getting started.  So, maybe you could go on to the next slide now.

So, in terms of what we have: just looking at, say, the six of our health systems that have, I'd say, made the most progress in terms of systematic collection of outcome data, we have a database that includes about 1.4 million people who've received some form of treatment for depression in just one year, 2019, where we have comprehensive data about their exposure to medication treatments for depression and -- this has come online within the last two to three years -- relatively complete data regarding outcomes.  So, the outcomes database in those six healthcare systems includes longitudinal data on about 3.6 million PHQ-9 depression score observations for about 1.1 million people.

What we find in these healthcare systems is that people initiating treatment typically have a standard measure -- typically the PHQ-9 depression score -- recorded about 85 percent of the time, and have outcome scores recorded typically 60-plus percent of the time.  The failure to collect outcome scores is, as many of you know, not because people are seen and the outcome is not assessed, but because people drop out of treatment. 

So, what do we really need to move forward on this?  Interestingly, we have very good data on outcomes.  We have really poor data on adverse effects of treatments.  We may know that medications are discontinued, but even when we try to mine text data to find out why, and what particular adverse events people experienced, they're poorly recorded.  And although we can characterize pharmacotherapy fairly accurately, all we can say about psychotherapy is that a visit occurred and, to be honest with you, a bill was paid.  We can say very little about the actual content. 

We would really like to have more useful data regarding the logic of treatment decisions.  That may sometimes be recorded in free text, but to understand, when a new medication was prescribed, was it prescribed because of adverse effects?  Was it prescribed because of lack of effectiveness of a previous treatment?  We'd really like to see that recorded in a systematic way.

And then, the analytic challenge here, I think, is different from the challenge of suicide risk prediction, where we really can use, I think, fairly standard and well-established methods for a yes/no discrete outcome and a fairly well-characterized vector of features or predictors.  Here, we're really trying to discover treatment response phenotypes.  We're trying to understand what patterns actually exist in nature in responses to different treatments.  The number of patterns is quite large, especially when we consider the possibility of order effects -- the order in which people were exposed to different treatments may matter, or may predict subsequent outcomes.  And we really are moving from prediction into causal inference. 

So, we're having to combine a, sort of, pattern discovery activity with a causal inference activity.  That gets fairly complicated, and that's something we're just starting to think about.  We do have a couple of people in our network who are working on these methods quite actively right now.  So, that's my sense of maybe why we're not quite as far along on personalized treatment, but what we're hoping to develop.  I'll stop there.  Thanks.
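The pattern-discovery half of what Greg describes can be sketched very roughly as clustering longitudinal PHQ-9 trajectories into candidate response phenotypes (a hedged illustration only; the file and column names are hypothetical, and the causal-inference step is not shown):

```python
# Hypothetical sketch: cluster PHQ-9 trajectories (scores at fixed weeks after
# treatment start) into candidate treatment-response phenotypes.
import pandas as pd
from sklearn.cluster import KMeans

cols = ["phq9_wk0", "phq9_wk4", "phq9_wk8", "phq9_wk12"]
traj = pd.read_csv("phq9_trajectories.csv")[cols]   # hypothetical one-row-per-person table
traj = traj.dropna()   # real data would need a principled missing-data strategy

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(traj)
traj["phenotype"] = kmeans.labels_
print(traj.groupby("phenotype")[cols].mean())       # average trajectory per candidate phenotype
```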

Guillermo Sapiro:
Thank you very much, both Rebecca and Greg.  And we are going to move to the other side of the ocean to Matthew Hotopf to talk --

[audio break]

Matthew Hotopf:
-- and the idea of this is really to try to get a handle on a 360-degree view of people who experience depression and to try to predict depression outcomes using mobile technologies.  If you look at the diagram at the bottom of the slide, the current state of play in most health systems and most research is that we get snapshots.  We get snapshots of people over the course of a long period of illness -- depression is a chronic disorder; it relapses and remits -- and we simply have these little snapshots in time, usually only when the person's got a diagnosed disorder, not before, and usually those snapshots are entirely dependent on them being unwell and seeking treatment.

You know, the challenge for mobile health is to be able to provide a continuous, pervasive stream of passive data, as well as some active data, which can give a very well-rounded indication of many of the higher functions of the central nervous system, including sleep, activity, sociability, cognition, speech, and mood.  And we do this using a platform we've developed in RADAR-CNS, which takes passive data from a smartphone -- so it'll include things like GPS -- and also from wrist-worn wearables (in fact, we're using a Fitbit in this study) to get actigraphy, and also active tasks: questionnaires, little bouts of sampling, and cognitive and speech tasks.
So, RADAR is actually active in three diseases.  We're doing this in epilepsy and MS as well as in depression. 

But for the depression study, we're just about at the brink of having reached our recruitment target of 600 participants, followed for up to three years over this time, with data coming from Android smartphones and Fitbits, with intermittent web-based interviews and also more detailed qualitative interviews on their experience of the study.  So, the aim of the study is really to try to identify bio-signatures which are indicative of relapse.  But it's a very naturalistic study.  It's a study which is taking people at all different stages of depressive relapse and remission and following them over time in as unobtrusive a way as possible.  Next slide, please.

So, next slide, please -- yeah, great.  Okay, so, I was asked to, sort of, highlight some of the questions and issues and how we've tackled them.  We've been collecting data for about two years or so, and the first question -- I think it's really important, actually, in this whole seminar; I haven't heard enough about it -- is about the patient, the participant, and their experience, because all these technologies aren't neutral: they require assent or consent for data to be used, and they have implications around privacy.
 
In the RADAR-CNS program, we're asking people to give us GPS data, which is obviously highly personal.  We do that by obfuscating it, but still, the issues around privacy and acceptability are critical.  So, we have had a really serious, intensive period -- and it's ongoing in the study -- of patient and public involvement, and a very active patient advisory board.  We have a lot of qualitative work.  We did a very, sort of, detailed workshop with patients and participants about what they would tolerate in terms of device selection. 

So, we're asking them to wear a device for two or three years -- what matters to them, and what matters to them might be quite different from what matters to researchers.  So, in fact, we went to a consumer-grade device rather than a research device because we wanted to make sure that participants had something which didn't feel stigmatizing, that was easy to use, where the battery life was reasonable, and so on, whereas the research-grade devices, where we might have been able to get raw data off the device, had more difficulties in those respects.

There's also a need for technical support, and actually, in the mobile health field, certainly when we were starting, it was quite difficult to find a platform which did the job for us.  What we wanted was something which was device agnostic, so that you could plug and play new devices into it; it wasn't, sort of, tied to a particular manufacturer's device.  And we built this -- we also, I think, made the decision to make it open source.  By doing that, you actually make it far more sustainable.  It doesn't just last for the length of the project; it actually has a life beyond that.

So, there is -- if you want to look it up -- a platform called RADAR-base, which is used to run the study.  I also put there "technical stuff happens," because when you're doing this, you're not in control of a lot of the technology.  During this time, Android changed their operating system.  It stopped us from being able to do some things which we were taking for granted that we could do.  Suddenly, notifications weren't getting out to our participants.  And you have to have a very, kind of, nimble way of identifying problems and fixing them in the middle of a study, in a way which most studies -- which run on pre-written plans and protocols -- just don't normally experience.

Another issue is future adoption.  So, we want this to get out into the real world if we demonstrate something beneficial, something which is actually likely to be useful.  And having early engagement with regulators has been really important.  So, we have had conversations with the FDA and EMA about how this would work out, both in terms of its utility in drug development and in terms of something which could be integrated into health systems.

And finally, data analytics, and this is work in progress.  I am an epidemiologist who's used to doing survey data.  I've never had data on the scale of this, with these varying temporal resolutions; the scale of the data is absolutely enormous.  So, these are huge challenges, particularly the issue of data missingness, because participants will drop out of the study and drop back in during the course of the follow-up.  That data missingness may often be quite informative.  It may be that the first thing you do as you start getting depressed is stop answering the questionnaires which we're pushing out.  Thank you very much.
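One common way to let that informative missingness into a model (a minimal sketch under assumed, hypothetical column names, not the RADAR-CNS analysis itself) is to add missing-data indicator columns alongside simply imputed values:

```python
# Hypothetical sketch: treat missingness itself as a signal by adding indicator
# columns alongside median imputation, so a downstream model can use both.
import pandas as pd
from sklearn.impute import SimpleImputer

weekly = pd.read_csv("weekly_data.csv")             # hypothetical: one row per participant-week
features = weekly[["questionnaire_score", "sleep_hours", "steps"]]

indicators = features.isna().astype(int).add_suffix("_missing")   # 1 = not reported that week
imputed = pd.DataFrame(
    SimpleImputer(strategy="median").fit_transform(features),
    columns=features.columns, index=features.index)
model_input = pd.concat([imputed, indicators], axis=1)
print(model_input.head())
```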

Guillermo Sapiro:
Thank you very much, Matthew.  And we are coming back to this side of the ocean with Peter.  Peter, please go ahead.

Peter Zandi:
Thank you for inviting me to come and present.  I'm presenting for the National Network of Depression Centers and our effort to establish a learning health system for mood disorders -- bipolar disorder and depression.  You know, for the last five years, we've been working on building the infrastructure for this learning health system.  So, we're not quite at the point where we're starting to analyze data and apply some of the methods that we're seeing here, but we're hopefully getting close to being in that position.  And so, what I want to provide is a high-level description of the work that we've been doing.

Really, the NNDC is a consortium of around 26 academic centers around the country that provide care for patients with bipolar disorder and depression.  The goals of the consortium are to advance patient care for these patients, to promote research, and to promote education around mood disorders.  So, we've been working on establishing this Mood Outcomes Program, which is our flagship initiative for trying to create this learning health system for mood disorders across the country -- across these 26 academic centers. 

There are really four goals for the Mood Outcomes Program.  The first really is to implement measurement-based care as the standard of care across all our centers.  There's a lot of evidence that measurement-based care works and is useful for patients, but there's reluctance to adopt --

[audio break]

-- reporting requirements from CMS, like MACRA.  The third goal of the Mood Outcomes Program really is to facilitate this collaborative multisite, prospective research in these real-world settings across these academic centers.  And the fourth goal really is to be able to take those findings from these efforts and be able to translate them back [inaudible] care for our patients across each of these centers that are participating in the consortium. 

So, in this way, the idea is that this Mood Outcomes Program can serve as an engine for driving the adoption of this learning health system across each of the centers that are participating in it.  So, we've been working on this initiative for the past four or five years to try to develop the infrastructure to support this program and to help us get to that goal, and we've made considerable progress over those four years.  And, you know, the first step was to agree on a standard set of measures -- patient-reported outcome measures -- that we wanted to capture from our patients at every clinic visit, so that it was like capturing vital signs from these patients as they are receiving care in our centers, and the clinicians would use that information as part of improving care.

So, we went through a process of working with each of the centers, and we agreed on three measures to capture at every visit: the PHQ-9, which we've heard about from other centers, the GAD-7, and the Columbia Suicide Severity Rating Scale.  And then, we developed an IT infrastructure for patients to complete these measures across each of the centers, and for the results of those scales to be viewed in real time by the clinicians and really be used for measurement-based care -- not only for screening, but for continual tracking of the patients as they are providing care to them.

So, we've gone through a pilot phase where we've implemented this program at eight centers across the country.  We've enrolled 10,000 patients, and we just finished that pilot program, and we're very encouraged by it.  And we're now at the step where we're going to be trying to disseminate this program to the rest of the 26 centers across the consortium, so that we're disseminating it more widely.  And we also have been involved in an initiative working with Epic. 

Over two-thirds of the centers use Epic as their local electronic medical record, so we can pull other data from the electronic medical record -- including encounters, problems, diagnoses, medications, procedures -- and we can pull that data, with the patient-reported outcome measures, into a central repository where we can then use it as a platform for the types of work we want to do, whether that is quality improvement or these large-scale, multi-site research projects: secondary analyses of existing data, or studies embedded in these real-world settings, whether a comparative effectiveness study or a pragmatic clinical trial.  So, we're in the process --
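At its simplest, that integration step is a join of patient-reported measures onto encounter-level EHR extracts (a hedged sketch; table and column names are hypothetical, not the NNDC's actual schema):

```python
# Hypothetical sketch: merge patient-reported outcome measures with encounter-level
# EHR extracts into one analysis-ready table for the central repository.
import pandas as pd

pro = pd.read_csv("pro_measures.csv")   # hypothetical: patient_id, visit_date, phq9, gad7, cssrs
enc = pd.read_csv("encounters.csv")     # hypothetical: patient_id, visit_date, diagnoses, meds

repository = enc.merge(pro, on=["patient_id", "visit_date"], how="left")
repository.to_csv("mood_outcomes_repository.csv", index=False)   # central store for later analyses
```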

Guillermo Sapiro:
Sorry, this is Guillermo, and I truly apologize to Peter and to Scott, who is coming next, but I just got a call from NIH saying that we're, like, five minutes behind.  So, I apologize, but if we could cut a few minutes from each of you -- I apologize.

Peter Zandi:
Sure, sure.  So, I'll finish real quickly -- sorry for going long.  One thing that I think is interesting is that we've leveraged this program just recently to engage and initiate a large-scale study of ECT across our centers.  We're implementing this program in the ECT services across the academic centers that are participating in the program.  We're harmonizing how we document ECT, and what we're going to be doing is, as our patients are receiving care through ECT, we're going to be collecting biological samples so that we can carry out genetic studies -- look for genetic markers of severe treatment-resistant depression and of response to ECT, both in terms of efficacy and cognitive side effects.

And our hope is that we'll be able to use that knowledge that we gain from that large-scale study which we're doing in collaboration with the PGC, Psychiatric Genomics Consortium, to identify markers of who are good candidates for ECT and who might respond well that would then feed into these, sort of, machine learning and AI algorithms that we're talking about.  So, that's just the beginning of the sorts of projects that we think will be really exciting that will be able to build on top of this program.

The next slide lays out a couple of challenges, but I think that they've all been, sort of, talked about in one way or another by other folks.  So, in the interest of time, Guillermo, maybe I should stop there and pass it on to the next person.

Guillermo Sapiro:
Thank you very much, and again, I apologize.  We could hear and listen to each one of you guys for an entire day, but we have a busy agenda.  Scott, you're next.  Thank you very much.

Scott Kollins:
Yeah, yeah.  Can you hear me okay, Guillermo?  I know there's --

Guillermo Sapiro:
Yes.

Scott Kollins:
-- great.  Well, thanks.  I think I should be able to keep this brief, you know, and I want to talk about a program of work that is in relatively early days, and so, I don't have as much in terms of data and results to present, but just to talk about a couple of, sort of, main problems or questions that we're working on.  I'm a clinical psychologist and professor in the Department of Psychiatry at Duke, and I'm also the director of the Duke ADHD program.  And we have been, sort of, thinking about this for a couple of years -- myself along with Holly, Guillermo, Geri Dawson, and others here at Duke.

One of the things that we've recognized is that child health, and child mental health in particular, are relatively understudied and under-addressed in terms of the application of data science to improving these conditions, and being trained as a child psychologist, this is, sort of, a nice place for me to start thinking about how to use some of these methods to improve the lives of kids.  So, we're working on a few things.  This is funded in part by support from NIMH, as well as NICHD and some internal support, [inaudible] kind of, two basic questions.

First of all, can we improve the way that we surveil risk for both ADHD and autism, as well as other, more general neurodevelopmental problems, using primarily EHR-based risk prediction models -- taking information that is available only through the EHR, applying machine learning and natural language processing, and identifying kids earlier than they would otherwise present for an assessment or a diagnosis of autism or ADHD?  And then, the second part of this work is prospective: to validate the models that we are able to build and then see if there's additional information that we can gather that would improve or refine model performance, and that includes both the sophisticated computer vision-based tools that Guillermo and others are working on as well as the addition of other patient-reported outcomes and surveys and things like that. 

Part of the platform to do this prospective validation is an Autism Center of Excellence that Geri Dawson and I are PIs of.  In one of the projects in that Center, we are ascertaining all kids that come through Duke Peds Primary Care starting at 18 months, and we're able to track them, see who ends up developing both ADHD and autism, and then use additional questions that are part of the study -- additional surveys and so forth -- to see if that improves model performance.  You can go to the next slide, please.
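As a rough, hypothetical illustration of the EHR-plus-NLP risk modeling described here (not the Duke team's actual model; the file, columns, and features are assumptions), structured utilization counts and bag-of-words note features can be combined in one pipeline:

```python
# Hypothetical sketch: combine structured EHR features with simple text features
# from clinical notes to predict a later neurodevelopmental diagnosis.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

df = pd.read_csv("peds_ehr.csv")   # hypothetical: well_child_visits, ed_visits, note_text, later_dx

features = ColumnTransformer([
    ("notes", TfidfVectorizer(max_features=5000), "note_text"),        # bag-of-words NLP features
    ("structured", "passthrough", ["well_child_visits", "ed_visits"]), # utilization counts
])
pipe = Pipeline([("features", features), ("clf", LogisticRegression(max_iter=1000))])
auc = cross_val_score(pipe, df, df["later_dx"], cv=5, scoring="roc_auc").mean()
print("cross-validated AUC:", round(auc, 2))
```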

So, we were also asked to talk a little bit about successes and challenges so far.  So, in terms of our successes to date -- and again, this is relatively early days; we've been working on this for about a year to 18 months -- we've had both institutional as well as NIH funding to support this work, which we are grateful for, and we've been able to successfully extract and curate a fairly large dataset from the Duke Health System, about 30,000 patients, and have started to work on the modeling using the EHR.  And today, we actually have a decent baseline model that we're continuing to refine, and we've already got a paper under review now, from one of our fellows, that is showing significant differences between kids who are later diagnosed with autism or ADHD in terms of their healthcare utilization as measured through the health record.

We've had some challenges, and -- this isn't any surprise to this audience -- the complexity of the EHR data is profound, and we've spent a lot of time working through that.  We know that a lot of the important risk factors are in the mother's health record, and even the father's, and that has been a challenge that we're working through: being able to link those data to refine model performance.  And one of the big ones that we're starting to really think about -- and, you know, having heard some of the other presenters, it's exciting to hear the other, sort of, precedents in this area -- is what do we do when we get a good model, and how are we going to integrate that into primary care.
We're just now starting work to understand different stakeholders' input on how they would use or think about risk if it's identified early.  So, what do you do with a model that says you've got an 80 percent chance that your 12-month-old will later be diagnosed with autism or ADHD?  There's a lot to understand about how primary care pediatricians, as well as patient caregivers, are going to use that information.  And so, we're really interested in starting to understand that better so that we can start to develop interventions and deploy them through the health record and other places.  So, I think I'll stop there.  Thank you.

Guillermo Sapiro:
Thank you very much.  Thanks to all the speakers, and I will pass this to Helen to moderate the discussion.  Thank you everybody.

Sarah Lisanby:
Helen, you may be muted.

Helen Egger:
I now am un-muted, thank you for the reminder.  You'd think, doing this every day, we would remember that.  But thank you so much for those great presentations, and now, we actually have four discussants who are going to make some comments on the exciting projects that were presented.  Someone said something about patient engagement and burden -- one thing I haven't heard that I want us to be thinking about is the importance of human-centered design: not only, sort of, the content of our tools, but how we present them, and how those choices are going to impact not only our results, but also our ability to scale our tools to be used in actual practice.  So, now, I'm going to turn to Laura Germine, who is from McLean.

Laura Germine:
Can you hear me?

Helen Egger:
Yes, we can, yes.

Laura Germine:
Great.  Let me know if I start to fade out.  So, I absolutely agree with the discussion about patient-centered design and thinking about what the role of the patient is in these programs.  I think that's a critical piece that we often have as a footnote or a future direction in all the discussions about clinical applications of machine learning and AI, but I think, you know, one thing that is a challenge for the field is how do we put that in the middle of the project?  How do we think about not just, you know, talking to patients in the form of, you know, patient [inaudible], which I think is really critical and really important.

But, you know, how can we engage patient communities to actually be a part of the process of developing algorithms?  So, think about how many really smart data scientists and engineers there are out there who are -- maybe they're patients themselves, maybe they have family members who are patients.  One of the things that's been striking to me about this situation we find ourselves in with COVID-19, the pandemic, is how much people who are not disease experts have actually contributed to our thinking about prediction, about what's going to happen, and about what the right public health interventions are.  Seeing the collaboration between epidemiologists and everyday people who have good skills in these areas has been really remarkable.

So, how do we think about engaging patient communities in the process of algorithm development -- that's one point.  And the second, related point is: where does open data fit in here?  So, we worry about privacy, but what about sharing?  Are there conflicts between the incentive structure around IP and, you know, future commercialization, and the incentive structures of science, that are really preventing data from being shared in a way that would maximally advance scientific progress?  So, those are just, I think, the two points that I would like to have us talk a little bit about, or at least bring to the discussion.

Helen Egger:
Those are fantastic, thank you.  I think we're going to go through each of our discussants to give these short responses and then address questions that are coming through.  Roy Perlis from Partners, can you go next, please?

Roy Perlis:
Sure.  So, I really enjoyed all the presentations.  It's nice to hear everybody pulled together in one place.  So, a few years ago, I was presenting some of our natural language processing -- our document hocus-pocus -- at the Broad, and one of my colleagues leaned forward and said, “So, is this a problem of science or engineering?”  And I think that's a critical point for us to keep in mind: much of what we're talking about -- it's not that the science isn't there, but it's really a question of engineering, in the sense that, as Greg and others mentioned, we have models that by objective performance metrics look very good.  And so, I do think one of the things we have to consider is why there has not been more dissemination. 

We and others have been doing this kind of thing.  I published a machine learning model for treatment-resistance prediction eight years ago -- not replicated, not used.  And so I think one of the big challenges that we should be explicit about, that people may not be comfortable acknowledging, is that Epic is the 800-pound gorilla, and Epic is a massive barrier to progress and dissemination.  So, it's wonderful that many of us have access to this sort of common data model, but it's also a huge challenge when Epic owns, by contract, any code that touches theirs, and it's just very hard to build on top of it. 

I want to mention a few specific things.  So, the genomic network had an RFA out for machine learning papers, and so I've probably seen, over the last year, a hundred-plus psychiatry machine learning clinical application papers.  I wanted to just highlight three things we need to talk about besides, sort of, engineering.  One of them is: I don't care about your AUC, and I don't care that your AUC [unintelligible], especially for rare events.  And so, I think it'd be really helpful for the field if we focused on PPV and NPV and other sorts of clinically interpretable parameters, as well as calibration -- right, how well can I stratify risk? 

I think we need real external validation.  What does your model look like in another health system?  So, our standard has become: you know, cross-validation is great, but how does it do across the country or across town?  And that, I think, leads into the final point, which is that we have to get past this sort of Groundhog Day approach to machine learning, where everyone presents their model as if there were no prior art. 

You know, again, we're now five to eight years into this era when lots of us are building models, and yet every model is presented as if it were the first -- it just, you know, significantly exceeds chance.  And so I think the last point I would make is, first of all, the default should be that if you're presenting a new model, you don't just compare it to chance; you compare it to the one that the folks down the street or across the country built, and I'm still not seeing that for things like suicide prediction.  And also, we have to compare it to actual clinical practice, not just chance, because we hope clinicians do a little bit better than chance.  So, we need to think a little bit about how we evaluate our models.  So, those are some challenges for the very smart folks on this call, I think, to consider in how to get to dissemination.
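The evaluation habits Roy is asking for can be illustrated roughly as follows (a hedged sketch on entirely made-up numbers, not any published model): report PPV and NPV at a clinically meaningful threshold, check calibration, and compare the new model head-to-head against an existing one rather than against chance.

```python
# Hypothetical sketch: PPV/NPV at a clinical threshold, a calibration check, and
# a head-to-head AUC comparison against a prior model (not just against chance).
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.metrics import confusion_matrix, roc_auc_score

def report(y_true, p_new, p_prior, threshold=0.10):
    y_hat = (p_new >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_hat).ravel()
    print(f"PPV={tp / (tp + fp):.3f}  NPV={tn / (tn + fn):.3f}")
    frac_observed, mean_predicted = calibration_curve(y_true, p_new, n_bins=10)
    print("calibration (predicted vs. observed):",
          list(zip(mean_predicted.round(2), frac_observed.round(2))))
    print(f"AUC new={roc_auc_score(y_true, p_new):.3f}  vs. prior={roc_auc_score(y_true, p_prior):.3f}")

# Entirely synthetic example data:
rng = np.random.default_rng(1)
y = (rng.random(5000) < 0.05).astype(int)
report(y,
       p_new=np.clip(0.05 + 0.30 * y + rng.normal(0, 0.10, 5000), 0, 1),
       p_prior=np.clip(0.05 + 0.20 * y + rng.normal(0, 0.15, 5000), 0, 1))
```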

Helen Egger:
Roy, thank you so much.  Those were excellent and bracing comments that I think really frame the things we've heard and will be hearing.  I'm going to turn now to Tanzeem Choudhury from Cornell.

Tanzeem Choudhury:
Thank you.  So, I wanted to make three points, and one is around patient-centered or user-centered design.  I think we have a broken model of chasing after engagement.  We talk about how we engage patients and how we engage doctors in a lot of the measurements that we can collect.  But even if we had a system where a doctor came to our house to deliver treatment, we wouldn't want them to come every day, right?  That would drive us insane.  Yet we think of engagement as something that the user should check every day, or multiple times a day.  I think it should be much more geared towards where the need is.  When I'm doing well and everything's going well, the system should fade into the background; it should engage and deliver me useful, actionable content when I need it.  I think we are, kind of, driven by the model of Twitter and Facebook, that you have to be always on, which I think is broken, and we haven't addressed it adequately enough. 

The other point, kind of building upon Roy's points, is: what cross-dataset insights can we get?  I think we have all these results that show predictive ability, but nothing that tries to bring datasets together, and I think that's something NIH could incentivize with the mobile [spelled phonetically].  Now there are multiple centers that have collected [unintelligible] -- I work a lot on mobile sensing data, but others have worked on online data -- so how do we bring them together and make sure they work?  And I'm interested in the data that Rebecca presented on suicide risk, [unintelligible] and some of the reproducibility in Alaska.  I think we need a lot more of that. 

And the other thing I would say is, if we think about measurement technologies, in the technology realm we have a lot of measurement.  Right now, in the mobile context, we can reliably measure a lot of things without user burden, and then we have a huge slew of interventions that work on mobile devices continuously for the patient, but they are not connected, right?  Yes, you go and say you need eight hours [unintelligible] 10,000 steps or whatever, but they should be much more seamlessly integrated, and the measurements are not driving the intervention in a way.  For example, the things that we are measuring are not guiding CBT or iCBT, and some of it is not even well aligned, right?  So yes, there are measurements of sleep and energy which could be triggers, but what kind of intervention is well aligned with the measurements?  I don't think we've thought about that enough; we just kind of say we do CBT, but sometimes we should explore it more broadly.  That's it.

Female Speaker:
Fantastic.  Thank you, Tanzeem.  And I think that one of the things I'm hearing in what you said is accounting for human complexity and comorbidity across psychiatric disorders -- when we're sort of focused on prediction in one area, whether we're taking that into account.  So, finally, Dr. Morency from CMU.

Louis-Philippe Morency:
Yeah, thank you very much.  It is a great pleasure to be here.  I'm a computer scientist by training and probably by heart [unintelligible] was a psychiatrist or psychologist.  I just want to bring up the point that Brian made earlier.  We talked a lot about sharing data.  We didn't talk as much about sharing what we would call trained models with their parameters -- easy-to-replicate models from machine learning -- and I think that's one of the issues that was raised in machine learning probably 10-plus years ago and has been changing.  I don't see it changing as quickly in healthcare.  People should share not only the data, but also the models.

My heart is in multimodal as well, and multimodal is the science of heterogeneous data.  This includes temporal heterogeneity, too.  So, when we think of heterogeneous data, we talk a lot about the mobile phone, but I also want to bring up that there are other modalities as well.  We have a lot of emphasis these days on telemedicine, and the modalities there are verbal, vocal, and visual communication, and I think we want to also think about these modalities.

And I want to bring this up because we discussed machine learning: machine learning and deep learning have great advantages -- they can often learn from large amounts of data with deep neural architectures -- but they bring big challenges compared to typical statistical approaches.  Interpretability is a big one, another one is fairness, and the third one is personalization.  So, it's much harder in a deep neural architecture to interpret the result -- why is it working?  Fairness is hard: is it working because in my population there were more men than women, and suddenly I have a bias?  That was also one of the points mentioned in the questions.

And finally, personalization: we talked about using the same system in many different hospitals, but we need to be able to quickly and efficiently personalize it and customize it.  And I will just end with an idea which is not completely new but has not been emphasized yet, so I want to bring it up: the paradigms we talked about were human knowledge and machine learning, but we didn't talk as much about hybrid human-and-machine learning, where the learning happens together and there is a mix and a collaboration.  So, that is a new and interesting paradigm, and it's not an entirely new discussion, but I wanted to bring it into the discussion here.  Thank you.
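
A minimal sketch of the subgroup check implied by the fairness point above (the group labels are hypothetical; a real audit would use whatever demographic variables are actually available): compute the same performance metric separately for each subgroup, so an imbalance such as more men than women in the training data shows up as a gap rather than being averaged away.

    # Illustrative sketch: the same metric computed per demographic subgroup,
    # so performance gaps are visible instead of hidden in one overall number.
    # `group_labels` is a hypothetical array such as ["F", "M", "F", ...].
    import pandas as pd
    from sklearn.metrics import roc_auc_score

    def subgroup_auc(y_true, y_score, group_labels):
        df = pd.DataFrame({"y": y_true, "score": y_score, "group": group_labels})
        return {
            group: roc_auc_score(sub["y"], sub["score"])
            for group, sub in df.groupby("group")
            if sub["y"].nunique() > 1  # AUC is undefined for a single class
        }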

Helen Egger:
Thank you so much, that's fantastic, and I would add to your comment around personalization that it's not only the issue of personalization for different systems or health systems, it's also how you translate prediction done at the group level into prediction of individual trajectories and individual pathways, so that's reflected there.  I know we have some questions.  One of the questions I saw --

Sarah Lisanby:
I'm sorry, Helen, I just want to break in to do a time check.  We're going to have about five minutes before we go on break.

Helen Egger:
Right.  Right.

Sarah Lisanby:
And we have Susan Azrin from NIMH who's been monitoring our chatbox and [unintelligible] box and has cued up some questions to lead off the discussion.

Helen Egger:
Right.  Great. 

Sarah Lisanby:
Susan, you want to go ahead.

Susan Azrin:
Yes, thank you.  Can you hear me?

Helen Egger:
Yes.

Susan Azrin:
Great.  Thanks.  So, this question is from Holly Campbell Rosen to any of the panelists who'd like to jump in.  Could one imagine a tool where providers and patients get guidance and then, in nearly real time, interact with a predictive system in the clinic, creating a closed loop that incorporates adverse effects, treatment preferences, and the like, which may adjust the decisions?

Greg Simon:
Yeah, this is Greg Simon.  What I'd say is, yes.  I mean, you know, that's what we would hope to build in terms of guiding depression treatment, and I think we will get there, but as I said, there are some key things we need to develop in terms of the kind of better data we need.  You know, as I'm fond of saying, we talk a lot about omics -- genomics, metabolomics, proteomics -- but we need to think more about preference-omics and value-omics in terms of how we incorporate what people want and how they feel into decision making.  The technical challenge there is not that difficult; we just need to prioritize it.

Peter Zandi:
I would just add to that -- this is Peter -- that to get to that vision, I think Roy's comments are pretty prescient here: it will require equal parts science and equal parts engineering to develop that kind of solution, to be able to develop those predictive models and the engineering to deliver them in a way that's useful to both the patient and the provider.  So, I think Roy's comments in this regard are really on target.

Helen Egger:
Right. 

Rebecca Rossom:
This is Rebecca.  I'll add that one way we found around that is to develop web-based decision support tools that, to the user, look like they're part of Epic and are integrated with Epic.  So, the things that the clinicians do in Epic inform the risk tools, and that creates that loop.  But because it's web-based, we can change it [unintelligible] quickly as guidelines change or other information changes.

Roy Perlis:
We call it a sidecar approach, and that's been our strategy as well.  I think that the difficulty is you've got to remember most of psychiatry is primary care, right?  And so while we as psychiatrists might sit around and look at the tool and think about this and a little bit of that, most PCPs I talk to say, just tell me what to prescribe and make it easy -- let me click a button -- and that kind of integration with Epic, I think, is where the engineering piece comes in.  We've got to make it so we present recommendations and they press a button.  It doesn't mean they shouldn't have the conversation, but in the real world it rarely happens, so we have to acknowledge that.

Helen Egger:
Holly, do we have time for one more question, or should we --

Sarah Lisanby:
Yeah, Susan, I think we have time for one more question.

Susan Azrin:
Great.  This is from Daniel Handworker [spelled phonetically].  It's for Greg and Rebecca, the MHRN.  The question is, the MHRN data collection seems to focus on patients with risk of suicide and self-harm, and this points to a general issue.  Are there data collection biases in these systems for patient populations that don't engage with the healthcare system as regularly?

Greg Simon:
Yeah, this is Greg.  I'd say, you know, the data that are theoretically available to us include the entire data exhaust of healthcare, from every healthcare encounter received by, you know, probably 25 million people.  We typically will create purpose-built datasets for a particular job.  So, for the job of suicide risk prediction, we harvest data specific to suicide risk prediction for people who are at elevated risk, but the native data are comprehensive.

All that said, they reflect healthcare and as we know, especially regarding mental health problems, there are people who never seek care and their conditions may go completely unrecognized and those people may differentially come from traditionally underserved racial and ethnic minority groups for instance.  So, yes, there are biases in the data available to us reflecting the biases in access to health care in this country.

Sarah Lisanby:
Thank you.  Helen and Guillermo, concluding remarks?

Helen Egger:
Guillermo, you want to go, speak first?

Guillermo Sapiro:
Let me just thank everybody, and maybe just one point for the future discussion.  I know there were some questions about interpretability that Greg had some great comments on; I'm sure it will come back.  I just want to put forward that sometimes it's not about interpretability but about transparency, and that's something that we also need to put forward.  And I want to thank the panel, I want to thank NIH for the opportunity, and I wish everybody to stay safe and continue learning this afternoon.

Helen Egger:
Yes, and I echo that, and thank you so much, and I know already there have been things I've learned and questions raised that I hadn't thought about.  So, thank you to each of our presenters and to our discussants.

Sarah Lisanby:
Great.  Well, thank you, Helen.  Thank you, Guillermo.  That concludes our first two morning sessions.  We're now going to take a 15-minute break.  We will resume right on time at 11:30 a.m. Eastern Standard Time, and please take this as an opportunity to stretch and get some food and coffee and fuel up, because we have even more exciting discussions to come.  And for those of you who did not have your questions answered, we will circle back to those during our one-hour general discussion session from 2:00 p.m. to 3:00 p.m. Eastern time.  So, we're going on break now, and we'll resume at 11:30 a.m. Eastern time.  Thank you so much.
In about three minutes, we're going to be resuming.  I just wanted to do an audio check for our next two moderators, Jen Goldsack and Jeffrey Girard [unintelligible].

Jen Goldsack:
Hi, Holly, this is Jen.

Sarah Lisanby:
Now, do we have Jeffrey Girard.

Jeffrey Girard:
Yeah.  How does this sound? 

Sarah Lisanby:
Perfect.  So, welcome back.  Again, I'm Sarah Hollingsworth Lisanby from the National Institute of Mental Health, and welcome back to the second half of our virtual workshop on transforming the practice of mental health care.  I want to thank the participants from the morning sessions, who really helped set the stage for where we would like to go in terms of our future vision for providing effective care for every person wherever they're receiving treatment, and laying out the lessons learned from [unintelligible] currently underway to mobilize large multimodal data sets to inform care.

Now, session three is where the rubber hits the road.  We're going to devote this session to discussing how we get from where we are now to that desired future state.  I'd like to thank our moderators: Jen Goldsack, who leads strategy, portfolio, and operations at Health Mode and serves as the interim executive director of the Digital Medicine Society [unintelligible], and who is joined as co-moderator by Jeffrey Girard, who completed a postdoctoral research associateship at Carnegie Mellon University and is now an assistant professor and Wright [spelled phonetically] faculty scholar at the University of Kansas.  Without further ado, I'd like to hand it over to Jen Goldsack to kick us off.

Jen Goldsack:
Thank you, Sarah, and Jeff, I think you're going to do the run-in for us today, right?

Jeffrey Girard:
Yeah, so we'll talk a little bit here.  So, in this third session, we will be discussing the intimidating but also thrilling task of trying to turn these technological promises and possibilities into realities that are both scientifically grounded and clinically feasible.  Walking this path forward is going to present many challenges that we have to overcome.  Some are unique to specific subareas like genetics, or, you know, the different areas that each of us works in, and others are probably going to be common across many different areas, and so we're hoping to explore all of that.  And I'm excited to have a really fantastic panel here today that represents a wide swath of areas from academia, medicine, industry, and government to discuss their thoughts and especially their kind of firsthand experiences in, you know, trying to blaze these trails forward.

So, each speaker is going to make a brief presentation -- we're hoping for around three minutes, maybe a couple of minutes longer, but we'll try to keep it short -- and then we'll have a broader discussion where again we're looking for those unique, as well as more common, themes that emerge from this.  So, with that in mind, we'll start things off with Glen Coppersmith, who is the founder and CEO of Qntfy, and who'll be speaking about quantifying mental health signals from digital life.

Sarah Lisanby:
And Glen, you may be muted.  Okay, I believe that Glen may have lost his audio connection.  Glen, can you hear us?  And I might make a suggestion --

Jen Goldsack:
In that case, why don't we skip ahead?  Brian, why don't you dive in with your presentation and then as Glen jumps back in, we will revisit his presentation.  So, thank you for advancing the slides, guys.  I really appreciate it.  So, Brian, if you want to briefly introduce yourself and then dive straight into your presentation on intermediate biomarkers.

Brian Pepin:
Sure, can you hear me?

Jen Goldsack:
Perfectly, thanks, Brian.

Brian Pepin:
Great.  Yeah, so my name is Brian Pepin.  I'm an entrepreneur working in software and data analytics around neuroscience therapeutics, specifically neuromodulation -- TMS, DBS, SCS-type therapeutics.

I'm going to use this quick example to demonstrate a concept that I think is important for the panel, which is that if you're talking about developing therapeutics -- maybe especially neuromodulation or pharma therapeutics for psychiatric conditions or other kinds of brain disorders, from depression to autism and sort of everything in between -- behavioral endpoints are really tough to use because they're noisy, they vary from rater to rater, there are all kinds of other factors that come into play, and everybody is sort of familiar with why those things are difficult.

So, what we're seeing is a movement towards using a variety of metrics, ranging from sort of imaging, EEG, fMRI-type metrics all the way down to sort of digital health activity-type metrics, to augment, I think initially, and then ultimately replace these behavioral endpoints with new kinds of more quantitative, biomarker-defined endpoints.

So, this example is for Duchenne muscular dystrophy, which is a neurodegenerative disorder in children, and the conventional endpoint for this is called a six-minute walking test, which has a lot of behavioral elements to it.  The patient basically walks back and forth over this six-meter course for six minutes, and then you record how many times they walk back and forth.  So, even though there's an element of quantification here, a lot of behavioral elements sort of bleed in.

So, it's heavily stereotyped, patient motivation is known to play a very major factor, there's all the normal stuff around the patient having to, you know, come into the clinic -- and these patients have movement disorders that can make that difficult -- and then, as people mentioned in the last session, you get this snapshot effect for a condition which realistically is fluctuating over time.

If you can advance to the next slide.  So, what some of our colleagues at Genentech Roche have done in this case is taken that six-minute walking test endpoint and replaced it with what they call a 95 percent stride velocity endpoint.  So, they use a wearable, watch-like device that the children actually wear on their ankles, and they [unintelligible] sort of the normal course of daily life -- so basically they run around as they normally would -- and it's able to detect when they're walking and measures this metric called 95 percent stride velocity.  So, it's not the peak stride velocity, but just short of it.

There are various reasons why they're doing that, but the end result is that what you have is continuous, so you're measuring a bigger picture of the patient's whole experience and quantifying it more accurately, without these confounding things like motivation and having to travel into the clinic.  And what they found is that the effect size for these two tests is somewhat similar, 7 percent versus 8.5 percent, but the standard deviation for the stride velocity test is far lower -- 7.9 percent versus 20 percent -- and what that translates to, in terms of therapeutic development, is an N of 14 and a study time of six months for the stride velocity endpoint, versus an N of 112 and something like 12 months for the six-minute walking test endpoint.
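
The specific Ns quoted here come from the sponsor's own trial design assumptions, which aren't spelled out in the talk, but a generic power calculation illustrates the underlying arithmetic: the required sample size scales with the square of the standard deviation divided by the effect size.  The sketch below is illustrative only and will not reproduce the quoted 14-versus-112 figures exactly.

    # Illustrative sketch only: generic two-sample power calculation showing
    # how required N scales with (SD / effect)^2.  The quoted trial figures
    # (N = 14 vs. N = 112) rest on design assumptions not given in the talk,
    # so these numbers will not match them exactly.
    from scipy.stats import norm

    def n_per_arm(effect, sd, alpha=0.05, power=0.80):
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return 2 * (z * sd / effect) ** 2

    print("six-minute walk test:", round(n_per_arm(effect=8.5, sd=20.0)))  # large N
    print("95% stride velocity :", round(n_per_arm(effect=7.0, sd=7.9)))   # much smaller N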

So, they've actually gone and started the approval process for this new endpoint in Europe, and they're awaiting some final decisions, and then they're also working through the FDA to get this.  So, they've actually taken even the next step of saying, "Okay, we have this sort of [unintelligible] or intermediate quantitative biomarker; we feel like we have enough evidence to go ahead and get this approved as the primary endpoint for a study."  And then once it's approved as the primary endpoint -- you know, in this case this is a somewhat rare disease -- it makes it much easier for them to recruit for clinical trials, because the trial can be smaller and can happen faster, and so it can increase the speed at which they can iterate on drug development and make it more cost-efficient.  So, that's real money there in terms of those clinical trial numbers.

So, that's kind of the example I wanted to go over, and there are many, many different examples of how these kinds of markers can make a difference in autism drug development or depression drug development, but this one was just a really nice, quantified example, so I wanted to share it.

Jen Goldsack:
Fantastic.  Thank you so much, Brian, and Glen do we have you back on the line? 

Glen Coppersmith:
Maybe?  Is everybody hearing me?

Jen Goldsack:
Yes, I heard that.

Glen Coppersmith:
All right.  I'm sorry about that.

Jen Goldsack:
No problem.  I'll let Jeff introduce you as these slides come up and just to note for everyone on the line, we'll run through all of our presentations, will then have a short discussion, and then we'll take questions.  So, if you have questions for our presenters as they go, please feel free to enter those into the chatbox, and with that, Jeff, I'll give it back to you, and thanks again, Brian.

Jeffrey Girard:
Yeah, thank you for being flexible here.  So, Glen, yeah, why don't you introduce yourself briefly, and then go through your presentation on quantifying mental health signals from digital life?

Glen Coppersmith:
Thanks for that, and I apologize for the audio issues; it's always the tech guy that has the issues, right?  So, I'm Glen Coppersmith.  I'm the founder and CEO of Qntfy.  We're a tiny company that couldn't afford the vowels when we got started.  I've lived in this world in between computer science and psych my whole career, and so this is sort of the outpouring of that.

Next slide, please.  So, what I want to briefly touch on here is that what we're facing is a Wicked Problem in the Rittel and Webber sense, right?  This is not a binary decision as to what medication we should give someone, this is not a binary decision about what treatment we should give them; it's actually more like trying to drive an autonomous car.  We're not going to pick a point on the map, take a picture of the trajectory from here to there, and use that as the sole source of navigation.

This is actually much more like the continual feedback loops that we want.  So, as things are changing in the world -- not that there have been many major changes in the world recently; things are pretty steady as far as I understand it -- but as things are changing in everyday life, people's personalization, their desires, what they need out of treatment, is going to change.

And so, one of the fundamental flaws in all of the applications of AI and machine learning in this space is that we actually need to think about this in terms of continual measures, and so how do we actually get from here to there instead of making the binary distinction?  The way you solve these problems is small, compounding, iterative improvements and measuring constantly, and that constant measurement is the piece that's missing.

Next slide, please.  So, there is some work here.  We've done some work in the past along those lines, looking at estimating suicide risk from data from outside the healthcare system, and so it's wonderful to see that a couple of people had a version of this plot, where healthcare system interactions and other non-healthcare-system data are present, right?  So, if all we're talking about is using EHR data, we're in some ways limited to looking at what happens within the healthcare interaction.  And so this is one person over three years, and you'll see up top, in red, their healthcare system interactions; in blue, their posts on social media; and in gray, their wearables or Fitbits.
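
A minimal sketch of the data alignment behind a plot like this (the three event streams and their timestamps are hypothetical): put healthcare encounters, social media posts, and wearable data for one person onto a common weekly timeline, so the gaps between healthcare contacts become visible.

    # Illustrative sketch: three hypothetical event streams for one person,
    # resampled onto a common weekly timeline.
    import pandas as pd

    def weekly_timeline(encounter_times, post_times, wearable_times):
        streams = {
            "healthcare": encounter_times,
            "social_media": post_times,
            "wearable": wearable_times,
        }
        weekly = {
            name: pd.Series(1, index=pd.to_datetime(times)).resample("W").sum()
            for name, times in streams.items()
        }
        return pd.DataFrame(weekly).fillna(0).astype(int)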

So, what we've been looking at -- and there's a whole bunch of literature here, not just the stuff that Qntfy has published, looking at these things -- is that there are great measures here, and what we really need to be able to do is to get the humans to trust these measures.  And so this is the largest challenge that I've run into over the last seven years that I've been doing this: convincing the clinicians that this is a reasonable [unintelligible] and there is useful information here, you know, to fill in that whitespace, has been the sort of fundamental challenge.  So, from my perspective, the tech is solved or solvable; the really hard part is the interaction with the humans.  So, with that, I'll step back.  Thank you very much.

Jeffrey Girard:
Oh, that's great.  Yeah, thank you for staying on time and for your thoughts on that.  I really like that metaphor of the autonomous vehicle versus this being a manual process.  I think next, we will have John Torous, who's the director of the digital psychiatry division at Beth Israel Deaconess Medical Center, and he'll be talking about mental health app uptake and clinical integration.

John Torous:
Thank you, guys.  I'll jump right in and maybe ask to go to the next slide.  I think that there's a lot of interest from our patients and clinicians in using mental health apps to kind of augment and extend care.  What's tricky, though, is that I think the metrics we have today about uptake and use are not accurate.

On the left, you can see a paper from 2016 that showed that star ratings and the number of downloads really tell you nothing about health app quality.  On the right, you see a very interesting paper from 2019 looking at people in the public who download a mental health app: how many are actually still using it two weeks later?  The answer is about four percent.  The one app group that bucks the trend is peer support, in gray; those apps have about 18 percent engagement.  So again, I think we have to be honest about who we are reaching.  It's not just that it's on the App Store and that's good; it's kind of what is happening and how do we pick it?

We'll go to the next slide.  So, I think, as Glen just told us, there really are some issues around trusting these things.  This is, again, an article screenshot I took from March 9, 2020, that says a mental health app threatened to sue security researchers over a bug report.  So I think we do still have some security issues that are kind of holding back trust.

On the right is a paper we published in NPJ Digital Medicine where we actually read the app stores and looked at what evidence these apps are claiming in their descriptions.  About 44 percent were using scientific language about their app.  We tracked that evidence down, and it really was about 1.4 percent.  So, again, there are a lot of reasons for people to be a little bit skeptical.

Next slide.  So, we've done some work with the American Psychiatric Association -- not recommending any app, not endorsing any app, but kind of giving people evidence-based principles and guidelines, a framework of questions to ask, to think about making a smart choice among these apps.  And you can see it really starts with privacy and security, then clinical foundation, which is evidence, then thinking about engagement, and then kind of thinking about the therapeutic goal.

Recently, New York City -- actually, before the pandemic -- used this framework to make its own app library.  So, we're not saying which apps you should pick; we're saying, look, just ask the right questions, make smart decisions -- all the things we've heard this morning -- and let's put this evidence to use in a framework that people can use to kind of pick the right app and have good experiences.

We'll go to the next slide.  So, our team actually does a lot of work on digital phenotyping -- that's actually our main thing, predicting relapse in psychosis with smartphone sensors -- and that research is very exciting.  In part, we can actually use elements of that today in what we'll call a digital clinic.  So, you can see, on that 1, 2, 3, 4, 5, those are different digital data streams that we can collect in research, from someone's step count to passive data showing what call and text logs could look like.  We don't know what people are saying or the content, but we can see kind of which days people were more socially active on calls and texts.  We can have people tag their environment, number three; we can have people take real-time surveys, number four; and we can begin to do kind of simple inference: does their anxiety today tell us about, say, their mood tomorrow?
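
A hedged sketch of that "simple inference" step (the column names and the linear model are illustrative placeholders, not the clinic's actual pipeline): join daily passive features and same-day survey scores, then fit a model that predicts the next day's self-reported mood.

    # Illustrative sketch: predict tomorrow's self-reported mood from today's
    # passive features and anxiety survey.  Column names are placeholders.
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    def fit_next_day_mood(daily: pd.DataFrame) -> LinearRegression:
        # `daily` has one row per day with columns: steps, calls, texts, anxiety, mood
        features = daily[["steps", "calls", "texts", "anxiety"]].iloc[:-1]
        next_day_mood = daily["mood"].shift(-1).iloc[:-1]
        return LinearRegression().fit(features, next_day_mood)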

What we actually do to take this outside of research -- which, again, we're doing to put it into clinical use -- is we work with a patient in our clinic and we say, "You know, what is the clinical question?  What data would be useful for us to learn about?"  And you can kind of see, on that flat icon diagram, we can say, "Hey, are we trying to learn about anxiety and sleep?  Let's track those for the next visit and add a survey.  Are we trying to learn about mood and mobility patterns?  Let's track that."  So, for each patient, we can kind of customize what we do and what we collect.

We'll go to the next slide.  This slide isn't meant to overwhelm people.  The app platform we use is open source; we have no commercial venture.  We offer people ways to learn, manage, and prevent, and it's overwhelming -- there are a lot of different options you can use, and for each patient we don't want to use all the things; we want to customize what people do.  And I think that's where the clinical integration really comes in.  Not everyone needs every sensor or everything we can do.  We can pick the right ones for the right people.

So, we'll go to the next slide.  In part, again, this is a kind of schematic of how we run the digital clinic.  On the left, we still put the patient and clinician together -- supporting a therapeutic bond is the most important thing -- and the technology comes in kind of as an addition.  We've actually created a new team member that we call a digital navigator.  Just like in radiology you have a radiology tech, and in pathology you have a pathology tech, it's probably time for mental health to have our version of a technologist, and we call it a digital navigator really because there's a lot of patient interaction and a lot of supporting the therapeutic alliance.  And in that schematic on the right, you can kind of see different things that we do in our clinic that the digital navigator supports in this role.

And I'll go one last slide.  We also think it's important to be teaching our patients with serious mental illness how to use technology.  Again, we do a lot of work in early psychosis, in schizophrenia, and in bipolar disorder, and patients increasingly have phones -- they've been given them; their social worker gave them a phone -- but they don't really think of their phone as a tool that can promote their recovery.  And so, hosting groups that teach people how to use their smartphones as meaningful tools and give them the skills and competencies really can be a very powerful intervention just on its own.  So, we have this manual called DOORS, Digital Outcomes and Recovery Services, that's open source, so you can look at it, and we're always looking for people to kind of help us do that as well.  So, with that, I think I am out of slides.  Thank you.

Jen Goldsack:
John, that was terrific.  Thank you so much.  So, next, we'll hear from Akane Sano.  She's going to talk to us about sort of the barriers and challenges as we try to move towards data-driven mental health care.  Akane, if you could introduce yourself to our guests and then dive right in.  Thanks so much.

Akane Sano:
Hello, everyone.  My name is Akane Sano.  I am an assistant professor at Rice University.  I am an engineer and computer scientist working on developing personalized machine learning models, especially for mental health and sleep, using multimodal physiological and behavioral data.

So, yeah, since this morning we have had a lot of discussion about data, and I just want to briefly talk about the challenges we face as engineers and computer scientists in dealing with these kinds of datasets for mental health research.  We have various kinds of datasets from different sources: clinical data, digital data, smartphone and computer usage, sensors, internet services.  We also have self-reported data.

We have various kinds of challenges.  For example, is it better to collect more data?  What is the burden on users and the investigators who are [unintelligible] to collect data?  And we have a lot of noise -- especially since the data we are working on come from a daily life setting, we have missing data and noise, and how can we deal with that?  And as we already discussed, user and patient adherence is another issue.

Also, physiologically and behaviorally, the patient's or user's status changes over time, and we have to think about how we can deal with time-varying status.  We also have a lot of individual differences, and the labels -- clinical labels or self-reported data -- can be very skewed.  That means the distribution might not be, you know, [unintelligible]; we may have skewed data around high scores or low scores.  And also, if you think about, for example, predicting relapses or suicidal events, those events are very rare, and compared to non-rare events, it's very hard to detect and catch those rare events.
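
A brief sketch of one common way to handle the rare-event problem just described (the data here are placeholders): weight the minority class during training and summarize performance with precision-recall rather than plain accuracy, since a model that never predicts the event is still about 98 percent "accurate" at 2 percent prevalence.

    # Illustrative sketch: class weighting plus a precision-recall summary for
    # a rare (imbalanced) outcome.  X and y are hypothetical placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import average_precision_score

    def fit_and_evaluate_rare_event(X_train, y_train, X_test, y_test):
        model = LogisticRegression(class_weight="balanced", max_iter=1000)
        model.fit(X_train, y_train)
        p = model.predict_proba(X_test)[:, 1]
        return {
            "prevalence": float(np.mean(y_test)),
            "average_precision": average_precision_score(y_test, p),
        }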

Next, please.  So, if we think about machine learning models and data science approaches for mental healthcare, there is a lot of ongoing research.  For example, we want to classify, or we want to diagnose, mental health disorders; we also want to detect or predict patient symptoms or risks -- and even beyond patients, we are interested in how we can [unintelligible] how we can detect people who might be at high risk for mental health disorders.

Not only detection, prediction, diagnosis, and classification, but also we are interested in selecting treatments and interventions for the right users at the right time.  A lot of ongoing research has been trying to develop machine learning models for this, and also, recently, one of the things we discussed in a previous session is personalization.  Personalization is one of the key elements, especially because everyone is very different, and our situation, context, traits, and physiology change over time.

Then, not only personalization, but also we have to think about subtypes, or groups of people, because everyone is different, but it's difficult to get a lot of data from each individual.  So, if we subtype people into subgroups, then we might be able to get a better model.  Another challenge is dynamics.  As I already mentioned, people's physiology, behavior, preferences, and conditions change, so our machine learning models have to be dynamic as well; however, a lot of the models we have developed so far are still very static.
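
A minimal sketch of the subtype-then-personalize idea (features, labels, and the choice of three subtypes are all hypothetical): cluster individuals on baseline characteristics and fit one model per cluster, a crude middle ground between a single global model and fully individual models when per-person data are scarce.

    # Illustrative sketch: cluster people into subtypes on baseline features,
    # then fit one model per subtype.  All inputs are hypothetical numpy arrays.
    from sklearn.cluster import KMeans
    from sklearn.linear_model import Ridge

    def fit_subtype_models(X_baseline, X_features, y, n_subtypes=3):
        clusters = KMeans(n_clusters=n_subtypes, n_init=10,
                          random_state=0).fit_predict(X_baseline)
        models = {}
        for k in range(n_subtypes):
            mask = clusters == k
            models[k] = Ridge().fit(X_features[mask], y[mask])
        return clusters, models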

Next, please.  So, if we think about the ideal data we could collect and use for our research or treatment, the data have to be, of course, long-term, and we really want to have data from diverse populations of patients and users in terms of demographics and geographic regions.  Again, some clinical events can be very rare, so we want to have more data on those rare clinical events.  And since many of us are interested in integrating multimodal data sources, the data format has to be consistent and standardized.

So, these data will definitely help in understanding individual and subtype differences.  We also need to think about how to build an environment for building and testing dynamic and personalized machine learning models to capture time-varying user symptoms, physiology, and behavior.

Next, please.  Then, another thing we have to think about is, a lot of us data researchers have been developing a lot of models, but how much precision, reliability, and interpretability do we really need to build clinically useful and trustworthy tools?  This is not only about accuracy and precision, but also: what kind of outcomes should we provide through these kinds of models?  What kinds of tools and information are really useful for clinicians, patients, and other stakeholders?

Deep learning has been very successful even in the healthcare setting; however, a lot of people know that we sacrifice some interpretability in [unintelligible], so how can we trade off interpretability versus precision?  And also, another thing I want to emphasize here is that we really need to build a research environment to integrate comparable datasets from multiple studies and multiple sites, and also a research environment to accelerate developing and testing the effectiveness and safety of machine learning models and our findings.

In some areas of healthcare research -- for example, ICU data, like the MIMIC database on PhysioNet -- the physiological datasets have been made public, and this has been successful in increasing the number of research activities.  So, in mental health research, I am really hoping that similar things will happen and that mental health will benefit from these kinds of activities.  Thank you.

Jeffrey Girard:
Okay, so now, I think we will move on to Matt Switzler and James Hickman, who are both coming from Epic.  And we're excited to be able to kind of hear from them directly about, you know, working on the electronic medical record.

Matt Switzler:
All right.  Can you guys hear me?

Female Speaker:
Yes.

Matt Switzler:
All right.  Hi, everybody.  My name's Matt Switzler.  I'm a software developer on the behavioral health team here at Epic.  At Epic, we provide electronic medical records, as well as lots of other systems that health systems can use to help their operations.  Lots of the other presenters have talked about the types of data that you can get in the traditional model of clinical decision support, which is typically implemented by a human and needs to have robust research behind it to drive the appropriate interventions.

And we've seen lots of good outcomes for these systems in data-rich environments, but one of the fundamental struggles that we've been hitting in the mental health space is that, unlike with physical medicine conditions -- where, when patients deteriorate, they tend to engage more with the healthcare system and you can collect more data points about them -- somebody with a mental health problem tends to engage less with the healthcare system, as well as with many other parts of their life, which gives us less data to work with.  So, even if you are sending out things like questionnaires, there's no promise that you can get them back.

That being said, even with some of these limitations, we've been able to see some pretty good outcomes in areas like predicting no-show rates, determining readmission risk, as well as early detection of physical medicine diseases -- for example, we've done a lot of cool work around sepsis -- so the infrastructure is in place.  And it's making a difference today, and we have active work ongoing to help augment that with AI and machine learning, and we're glad that we can be at a forum like this to improve this in the behavioral space.  I'm going to pass it over to James -- and we will want to advance to the next slide -- and he can talk about what we're doing to augment these systems with AI at Epic.

James Hickman:
Thanks, Matt.  So, as Matt mentioned, our focus of experience at Epic is really the application of machine learning at the point of care and that includes the acute care setting, of course, the ambulatory clinic and then increasingly, outside of the health system, [inaudible] for population health. 

The early [unintelligible] of machine learning within healthcare has focused on helping and improving workflows that already took advantage of scoring tools, with more accurate machine learning predictions.  But I think in the last couple of years we do see a bit of a feedback loop emerging, as health systems are starting to look at leveraging machine learning for their informatics to enable new workflows.  Some recent examples of that are, you know, centralized monitoring in the hospital setting, and also a large number of specialty-specific predictions that are putting more power in the hands of primary care providers or other frontline providers prior to the consult with the more limited specialty resources, be that nephrology or perhaps behavioral health.

There are some key concepts that we've found are important to ensure the success of these efforts.  A lot of this echoes what's been said already this morning, but the first is providing tools that are transparent -- and, of course, that's important from a quality and an ethical standpoint, but it's also key to increasing the actionability, if you will, of these tools.  So, in our experience, this has been a little bit less about the kind of arms race over the type of algorithm that we're choosing and much more about user-centered design and exactly how we present that prediction to a particular user.  The second is putting the prediction in the clinical workflow: catching that patient when we have them with us for a visit, perhaps also taking advantage of brand-new clinical documentation and rapidly, you know, revising a patient's risk for a particular event in real time.

And then last, I certainly agree that we must put a premium on the experience of the providers we're asking to use these tools.  So, there's really an emerging science in designing both UIs and models that aren't solely optimizing towards accuracy, but really towards the impact on clinical workflow.  And to that end, I'll certainly support some of the statements made earlier about the difference between something like the area under the curve as a method of evaluating models and, particularly, some of the threshold metrics like positive predictive value or negative predictive value.

So, our development at Epic and the opportunities that we have to work with health systems that are clients of Epic have really focused on making the deployment as technically easy as possible while maintaining intellectual property rights.  We certainly welcome conversation today or afterward about the role that Matt and I and others at Epic, as developers, can play in motivating and supporting the spread and sharing of models.

We understand that there are data challenges, we understand that there are also intellectual property questions and sort of validation on different populations and things like that, and we're happy to participate in all of those.  So, I really appreciate the opportunity to listen in and participate with so many leaders in this space.  Thank you.

Jen Goldsack:
Matt and James, thank you so much for joining us today.  And let's move on to our final presentation of this session before we dive into our discussion.  We welcome Donelle Johnson to talk a little bit about the importance of the workforce keeping pace with this era of technology.  Donelle, if you'd like to introduce yourself and then dive in, that would be great.  Thanks so much.

Donelle Johnson:
Sure.  Thank you.  I'm Commander Donelle Johnson.  I work in the National Mental Health and Substance Use Policy Laboratory at SAMHSA.  And so, I'm just going to talk to you about some of the challenges and opportunities with the workforce as far as transforming the practice of mental health care.

You can go to the next slide, please.  Okay.  So, given SAMHSA's charge to reduce the impact of mental illness and substance misuse in America's communities, ensuring that individuals with mental health conditions have access to evidence-based, high-quality care is a critical focus for us.  A major factor in achieving this goal is addressing the training and education needs of our practitioners.  To facilitate the achievement of this goal, data on the current numbers and trends among practitioners are imperative.

So, we have the mental and substance use disorder practitioner data program, which provides comprehensive data and analysis on individuals who comprise the prevention and treatment field to address mental and substance use disorders.  So, the goal of the program is to provide valid data on existing practitioners and usable information to SAMHSA on which to make policy and planning decisions. 

For instance, knowing how many practitioners there are and how they vary can help us determine the needs and areas to focus on.  For instance, we know that minorities are traditionally underrepresented among behavioral health practitioners, so we have the Minority Fellowship Program, which supports the development of behavioral health practitioners, and specifically it aims to increase the presence of knowledgeable and skilled practitioners available to serve racial and ethnic minority populations.

By increasing the number of culturally competent professionals in the workforce, the program seeks to reduce health disparities and improve behavioral healthcare outcomes for underserved minority communities.  The program also seeks to encourage racial and ethnic minorities to join the behavioral health workforce.  The relative scarcity of professionals who are from culturally and linguistically diverse backgrounds constitutes a workforce issue that contributes to the current disparities in quality of care and access to behavioral health treatment. 

We also have the technology transfer centers, and these centers help to develop and strengthen the specialized behavioral healthcare and primary healthcare workforce that provides prevention, treatment, and recovery support services for substance use and mental health disorders.  The mission is to help people and organizations incorporate effective evidence-based practices into substance use disorder and mental health prevention, treatment, and recovery services.  Together, the network serves all 50 states, the District of Columbia, Puerto Rico, the U.S. Virgin Islands, Guam, American Samoa, Palau, the Marshall Islands, Micronesia, and the Mariana Islands.

Now, through the centers, we provide technical assistance to all providers, not just SAMHSA grantees.  So, within the centers and the training programs that I mentioned, there are opportunities to educate and train providers on using technology, which can address some of the barriers.  For instance, it can help to increase providers' knowledge of and comfort with using the technology, as well as their ability to gauge the patient's knowledge and comfort in using that technology.  It can also help them acquire knowledge about privacy and reimbursement policies around the use of technology, especially with things like telemedicine and all the rules around how you can get reimbursed for that.

While there may be barriers such as cost and infrastructure to using technology, both on the provider end and the patient end, training and educating the workforce to effectively use technology in practice is an opportunity for improving access to care.  So, that's about it, but I just think we can't forget the importance of helping the workforce get on board with this so that they can move it forward.  So, thank you.

Jen Goldsack:
Terrific, Donelle, thank you so much.  And just to orient folks on the line to how we're going to handle the rest of the session: Jeff and I will lead some quickfire questions for our panelists, and then we'll throw it open for those of you on the line to raise questions directly to our experts.  And I'll kick us off, Jeff, if I can, and dive in.  Glen, I have a question directly for you.

During the prep work, we had some discussion about sort of what the limiting factors are for introducing these sorts of digital measures into the practice of mental health care.  You and I were on the same page that we don't believe technology is the limiting factor.  I wonder if you could talk to us about what you believe some of the other considerations are beyond the technology itself.  What's holding up the implementation of these digital tools?

Glen Coppersmith:
Yeah, thank you for that.  I agree.  I think technology is a small part of the grand scheme of things here, and the larger barriers are cultural ones and ethical ones.  And so -- and I may not say what you're expecting me to say -- patients want more analysis of these different things.  They see what they're getting for targeted advertisement, and that works stupidly well, and then they turn around and none, or very little, of that data is being used to help them.  You know, it captures a lot of the same data, but it's not being used to help them understand their well-being and help make them better.

So, I think this all comes down to a single point that we can look at, and it's a very interesting ethical discussion we've had with a number of colleagues: this is a very interesting spot where we're putting privacy against the duty of care.  And so if we have technology that's capable, for example, of predicting suicide risk from all the public data that we can find about a person -- or even more interestingly, from people that have opted into it -- if we can predict suicide risk, it's going to outstrip the capabilities within the healthcare system.

This is now putting the privacy of the patient -- who has given it away, who is willing for us to analyze them, willing to be analyzed -- against the duty of care for a system, where there's a real cost for not engaging with these sorts of things.  So often it's treated as a one-sided thing where the privacy of the patient is paramount, and I agree, but there's also this other side of that coin, where there is a cost in terms of lives if we're not going to use the technology available to us.

And so it all comes down to this point, this large ethical question.  Everyone screams that there are ethical issues here, and there might be, but at the same time, we have to look at both sides of this ethical issue, not just say that the technology is somehow scary.

Jen Goldsack:
Glen, that was terrific, and a really sort of provocative teaser, I think, for our next session on ethics.  So, strong work on the segue there.  Thank you, Glen.  Akane, I wonder if we could pose the next question to you.  You spoke a lot about having the right data in order to do this well.  What else do we need to build sort of clinically useful and trustworthy tools in mental health?

Akane Sano:
Hi.  I think, in addition to the data, we need, you know, some ideal online system for researchers to build and test models, like, in real time.  And also, the other important thing is, you know, we also need to communicate closely with clinicians and users about what they are looking for and what kind of user experience they have.  So, I'm an engineer and computer scientist, but I also work closely with clinicians.  We also need to integrate this -- working with various stakeholders and getting their input into our systems and models.

Jen Goldsack:
Fantastic.  Thank you, Akane.  Jeff, you've got a couple of questions for our colleagues [unintelligible].

Jeffrey Girard:
I do, yeah.  So, I'll aim the next one at Brian.  We've talked quite a bit throughout the workshop today about this idea that, you know, these new technological measures could help us actually inform more tailored therapeutic regimens.  So, I'm hoping you can speak a little bit to how the rubber actually meets the road on that.  How do we integrate these measures into, you know, tailoring the therapies that we actually provide?

Brian Pepin:
Thanks, Jeff.  So, I'll go with two elements here.  One is you have to have the right end goal in mind, and by that I mean, ultimately, you're trying to get an FDA-approved therapy with this sort of intermediate biomarker, maybe a companion [spelled phonetically] diagnostic, or you're getting an FDA-approved biomarker linking some physiological output to some internal trait or some clinical diagnosis.  So -- [sneezes] excuse me.

So, that implies a lot of sharing of your data with the FDA, building toward a big consensus -- and these things generally aren't patentable.  The link between a physical signal and a sort of physical outcome isn't typically patentable, so you have to kind of have that framework in mind.  You know, if you're a company, this biomarker is not going to be your competitive advantage, right?  You can't have it both ways: have the biomarker be your competitive advantage and also something that there is broad scientific consensus on in the community, such that it could get FDA approval and actually be used as an endpoint.

Those are sort of mutually incompatible.  It's similar if you're an academic group.  You can't, I think, have a very big impact if you're just creating datasets which you don't consent in a way that lets them -- even de-identified -- ever leave your university servers; then, you know, your group can continue to publish off those papers, but it's very hard to replicate, very hard to integrate that with other work that people are doing, and you never build this broad consensus that this is a real biomarker.  So, I think starting from that philosophical place -- where, I think, there are still a lot of people all across the map -- is a good beginning.

And then practically, to get these things actually integrated into studies, basically just realize they don't come for free.  The infrastructure to do this -- the patient software interfaces, the platforms, the actual hardware devices -- none of that is free.  And so, if you're a biotech or pharma or MedTech company and people are doing this, you just have to budget for that and think, you know, "Okay, if I have this and I can run a clinical study in six months instead of 12 months, and with 14 patients instead of 50 patients, how much is that saving me?  What's the uncertainty there?  How much can I budget for this?"  [inaudible] and integrating that and, you know, finding the right partners.  And for every indication across psychiatric disorders and everything else, there are companies that partner with pharma and biotech to do these sorts of things.

On the academic side, I think there's some stuff that NIH can do in terms of properly incentivizing people to do things like have kind of open-source or open-access layers to at least de-identified data, so that datasets from different groups are a little more interoperable.  Kind of proactively, you know, when they're thinking about funding studies, making sure that there are plans for this kind of layer of quantitative biomarker work to be done with [inaudible] that are using common platforms, and that the data infrastructure is there so that the data can kind of live beyond the study and not just be sort of aggregated [inaudible], not just kind of dumped into a bunch of file folders and, you know, posted in the NDA.  So, I think that's --

Jeffrey Girard:
That's great, Brian.  Thank you.  Yeah, so really trying to find ways to align, kind of, the incentive structures that exist for both companies and researchers with what we need to move the field forward.  I think that's a really critical point, and I'm glad you raised it.  For the sake of time -- sorry to cut you off -- I think I'll move on to the next question, for John.  It's great to have somebody who is really on the front lines trying to do this stuff as a clinician, and so I was hoping John could speak a little bit to what the actual experience is like, both for providers and, critically, for patients, to actually try to use these types of digital tools.  You know, what are the good and troublesome aspects of that, in your perspective?

John Torous:
I think that's a good phrasing: there are good aspects and there are troubling ones.  I think that sometimes there's frustration because there will be very interesting scientific papers and reports about using these technologies, but when you kind of read the research you go, "Well, that was done in a way that really doesn't match how we're going to deploy it in clinical settings."  Or again, if you pay everyone $100, and they get an extra therapist, and they get a yacht, and a jet, and a mental health app and they feel better, that doesn't kind of answer the question.  So, we need studies that are rigorous, but again, because they're rigorous -- as you add active controls to these app interventions -- we know they're not going to look as strong.

But I think having evidence that's kind of realistic and applicable and generalizable is so important, because people do want to pick this up.  Clinicians want to offer the best services and tools, patients want to get the best outcomes, so I think we're all on the same page.  But I think sometimes the hopes kind of get ahead of the evidence.  That's, I think, also about designing these things to fit into a clinical workflow.

In part, we did kind of design our digital clinic [unintelligible] the workflow to kind of make sure we could use apps and technology, but I think it's something we don't love to think about -- kind of messy implementation details.  And I think even very simple algorithms and sensors can go a long way when that implementation is thought through, and those results can actually be generalizable, so that our clinician colleagues -- again, myself included when we're doing this -- can learn from them.

Jeffrey Girard:
That's great, thank you.  Jen, do we have one more kind of specific question before we go to the broad ones?

Jen Goldsack:
Yes.  So, this one is for Donelle -- and John, thank you, that was really terrific.  I liked how you spoke about the evidence needing to keep pace with expectations.  Donelle, your workforce presentation was really excellent, and I wonder if you could go into a little bit more detail about how a more representative workforce can help better integrate behavioral health into broader public health, sort of traditional clinical care, and research.  And also, how can we think about the role of technology in facilitating that process?

Donelle Johnson:
Okay, sure, thanks.  So, a more representative workforce can be used to inform public health, traditional healthcare, and research.  These providers are able to bring real-world experiences into all those fields, and a representative workforce is not necessarily limited to demographics; it also includes specializations within the workforce.  So, for example, I forget which provider -- I mean, which presenter -- it was earlier that mentioned the need for more child mental health specialties. 

So, again, they can help to inform that and address the areas that need more specialists.  They can definitely inform the areas that you mentioned.  As far as technology goes, technology can absolutely help to better connect individuals and providers, keeping in mind we do need to address some of the barriers.  But with the advance of mobile technologies, healthcare services can now be provided in new ways.  Look at it now, with COVID-19 going on: we're using a lot of technology.  Providers are using it.  Some of them may not have been informed or comfortable using it, but they're being forced to use it now. 

So, I think with a representative workforce, if they're knowledgeable about technology, they can gauge their patients' comfort and they can know what to utilize.  Not everyone -- especially in the older population -- may use smartphones, but it could be a simple telephone call that the providers would be able to use.  So, they need to be aware of and trained on the technology.

Jeffrey Girard:
Great.

John Torous:
And I'll just say quickly, I think peer specialists -- it's a really great opportunity to have them help in this kind of technology implementation role.  We have a new need here, and peer specialists really can do amazing work in this space.

Jeffrey Girard:
Yeah, that's a very interesting, yeah, resource to kind of broaden that workforce even further.  I'd forgotten, we have one more question we haven't talked -- a specific question for Matt and James.  You know, we had basically talked about, both in this session and in previous ones, about, you know, sort of a need for some standardization in terms of data and just the myriad challenges that come from working with data on the large scale, especially when it's unstructured.  And so, something like Epic and, you know, the electronic health records systems possibly presents a nice streamlined way of obtaining that type of standardization.  So, I'm hoping you can kind of talk a bit about what you guys at Epic are doing or what you personally see as being, you know, important steps on a path forward to that standardization.

Matt Switzler:
Yeah, I think there's a couple things, a couple different ways we approach this at Epic.  Probably the broad one is that we do participate in groups like Carequality and HL7, we follow the FHIR standard, and we're often commenting on any ONC proposed rules around interoperability, so that even though you might be inside an Epic system, if you travel to a certain hospital we want to make sure that we can exchange data with those groups. 

And we don't focus solely on medical information; we continue to push them on behavioral data, psychological data, as well as social factors, to make sure that they're considering those.  From an implementation perspective, we in general push all of our implementations to track data discretely.  A rule of thumb we use is, if you're not going to get the data back out, don't bother putting it in.  So, we try to think about some of those reporting components at the very onset, before people are even in the system.  Those are the big things I'm thinking about.  James, do you have anything to add to that?
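
For readers unfamiliar with FHIR, the HL7 standard referenced above, a minimal sketch of what "tracking data discretely" can look like is a coded Observation resource rather than a free-text note.  The example below is illustrative only, not Epic's implementation; the patient reference and values are invented, and the LOINC code shown is, to the best of my knowledge, the one commonly used for a PHQ-9 total score.

```python
# Minimal sketch (not Epic's implementation): representing a PHQ-9 total
# score as a discrete, coded FHIR Observation instead of free-text notes.
# The patient reference and values are invented for illustration.
import json

phq9_observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "44261-6",          # commonly cited LOINC code for PHQ-9 total score
            "display": "PHQ-9 total score"
        }]
    },
    "subject": {"reference": "Patient/example-123"},   # hypothetical patient reference
    "effectiveDateTime": "2020-03-15",
    "valueQuantity": {"value": 14, "unit": "{score}"}
}

# Because the value is discrete and coded, it can be queried, trended, and
# exchanged across systems; a narrative note saying "moderate depression" cannot.
print(json.dumps(phq9_observation, indent=2))
```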

James Hickman:
Yeah, a couple things.  Thinking a little bit about comments earlier from Greg and Roy, we have an opportunity to work with a lot of health systems that are maybe deploying their own models or, perhaps, at times taking advantage of a model trained elsewhere.  And our approach has been to guide those organizations to document things as uniformly as possible.  The overarching goal, absolutely, is to reduce the technical overhead of sharing and spreading models.  We would absolutely love to see further uptake of shared ontologies, like LOINC for labs, for example, but in the interim we're working pretty closely with health systems to make sure that, as they're managing and transforming data, be it in a data warehouse or some other system, they are taking approaches that reduce the overhead of sharing the model.  Our strategy has been, overall, to make that validation and understanding of how a model travels as easy as possible, because there's plenty of complexity on the operational side of evaluating how that model fits into clinical workflows.  Thanks.

Jennifer Goldsack:
Matt, James, that's terrific.  We are very spoiled, I think.  Our hosts have been curating questions from the audience.  Let's break now and see what questions folks on the line might have for this session's experts.

Sarah Lisanby:
So, we have Jenni Pacheco from NIMH.  Jenni, are you -- do you have some questions for the group?

Jenni Pacheco:
Hi, yeah.  I have one that's more of a comment, but for anyone else that's on the line, if you have some questions you can use the chat box or the Q and A section to post them.  The comment I have is just following up on [unintelligible] point about the need for resources to train and test models.  They're saying that there's an interesting example outside of the clinical domain for training deep RL models: Gym, by OpenAI.  It provides a structured way to interface with data, test models, and compare models across different environments.  So, the suggestion here is that there might be the potential to use a similar concept to protect patient privacy but still provide an interface to researchers who want to build new models.  I'm not sure if any of our presenters would have a comment or anything to respond to that.
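
As a rough sketch of the concept being suggested, OpenAI Gym exposes environments behind a common reset/step interface so that models can be trained and compared without touching the simulator internals.  The hypothetical class below imagines the same pattern over a protected clinical dataset; it is purely illustrative and does not correspond to any existing package.

```python
# Hypothetical sketch of a Gym-style interface over a protected clinical
# dataset: researchers interact only through reset()/step(), never with
# raw records. Purely illustrative; no such package is implied to exist.
import random

class ProtectedCohortEnv:
    def __init__(self, n_patients=100, seed=0):
        rng = random.Random(seed)
        # Synthetic stand-in for sequestered patient symptom trajectories.
        self._trajectories = [
            [rng.gauss(10, 4) for _ in range(12)] for _ in range(n_patients)
        ]
        self._patient = None
        self._t = 0

    def reset(self):
        """Start an episode on a randomly chosen (hidden) patient."""
        self._patient = random.choice(self._trajectories)
        self._t = 0
        return self._patient[self._t]          # initial observation only

    def step(self, action):
        """Advance one visit; reward is improvement in the symptom score."""
        self._t += 1
        done = self._t >= len(self._patient) - 1
        obs = self._patient[self._t]
        reward = self._patient[self._t - 1] - obs   # lower score = better
        return obs, reward, done, {}

env = ProtectedCohortEnv()
obs = env.reset()
total, done = 0.0, False
while not done:
    obs, reward, done, _ = env.step(action=None)   # trivial policy for the demo
    total += reward
print(f"Episode return (total symptom improvement): {total:.2f}")
```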

John Torous:
There are some ongoing efforts.  I think York has one and University of Maryland, at least, are working on these things that are like data enclaves and this is a slightly different take on that.  But how do you bring the researcher, vet the researcher, and bring the researcher to the data, rather than bring the data to the researcher?  There's a lot of good paradigms there, and I'm happy to chat with anyone who is interested in that particular thing.

Jeffrey Girard:
I'll also say, if -- as we're waiting for other questions to come in, we have a few other ones, too, that we have prepared just in case.  So, one, I think, kind of common theme that emerged was this idea of trust and buy-in, both from the providers and the patients.  And, you know, I think one of the things we had talked about was maybe trying to do a better job of aligning the types of metrics that, you know, doctors know and expect with maybe what is more common in the machine learning and predictive spaces.  But what else do we need to do to make sure that, you know, trust and buy-in are optimized here and that we're really being responsive and kind of proactive about this?  And so, this is a question for anybody that wants to chime in.

Male Speaker:
I'll be quick here.  The users will tell you.  This is at least in the app world, right?  If you look at retention over time.  So, John had that great slide about how quickly people are falling off, two weeks later they're not using most of these apps.  That's a good indicator that that's not going to work over the long term to induce behavioral change.  And so, we've done -- we have our data helps, there's an opt-in study.  We have 6,000 users that have been willing to be part of that for four years, and so part of that is we're listening with the [unintelligible] community or engaging with them.  Not a focus group, but like continuously looking at retention rates.  And so, like, we should take a page from the app world.  They've done this really well because they're heavily incentivized to do it.
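
For context, "retention over time" is usually computed from usage logs as the fraction of a cohort still active N days after install.  A minimal sketch, with invented data, follows.

```python
# Minimal sketch: day-N retention from (user, days_since_install) usage logs.
# The data and horizons are made up for illustration.
from collections import defaultdict

usage = [  # (user_id, day of app use relative to install)
    ("u1", 0), ("u1", 1), ("u1", 13),
    ("u2", 0), ("u2", 2),
    ("u3", 0), ("u3", 1), ("u3", 15), ("u3", 30),
]

last_seen = defaultdict(int)
for user, day in usage:
    last_seen[user] = max(last_seen[user], day)

cohort = len(last_seen)
for horizon in (1, 7, 14, 30):
    retained = sum(1 for d in last_seen.values() if d >= horizon)
    print(f"Day-{horizon} retention: {retained}/{cohort} = {retained/cohort:.0%}")
```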

John Torous:
This is John.  I think the trust issue is very important, because without trust you don't have health or mental health.  I think the bar that we're asking for trust is that if people give data about their mental health it's used for their mental health, it's not going to be shared, traded, bartered.  This is not an extremely high bar; we can go higher.  But I think that if you kind of look at -- unfortunately if you go to what consumers are using today, what patients and doctors are using today, it's such, kind of, an abysmal setting for trust and privacy that I think even small efforts -- and we don't have to fix all of it -- would kind of offer exponential increases in trust, which would lead to better engagement, to better quality outcomes.  So, I think this is something we could fix today.

Jennifer Goldsack:
And John, if I could riff off that for just a quick second, as perhaps the last question for our session: how important are things like health and tech literacy in getting that right, and getting that communication and transparency, which is crucial to trust?  Making sure that it's equitable across the board?

John Torous:
We're running digital health training groups for patients with psychosis, and we're actually very impressed with how quickly people can learn to use smartphones, and they get excited about it.  A lot of times, no one has actually sat down and said, "Did you know that your phone is probably tracking steps automatically?"  Or, "Did you know here's how you can reach [unintelligible] medications automatically?" 

These are things that people's phones can do.  They're actually very excited, and then they want to build new skills and competencies.  But I think, as we move more digitally-first, as we're talking about here, making sure people aren't digitally excluded because they don't have the skills, the competency, and the confidence is probably of utmost importance.  Again, it's not as fancy as some other things, but I think it's the real work that we have to do to make sure that we help everyone.

Jennifer Goldsack:
Fantastic.  And so, it looks like we're nearly at time.  I want to say thank you to Jeff for being a brilliant co-moderator for this session.  Also, to Glenn, to Connie, to Brian, John, Donelle, Matt, and James for being absolutely terrific, sort of, discussants for this session.  You know, as we think about the sort of -- the barriers and the challenges and the opportunities, what's clear to me after this conversation is that it's too simple to sort of point the finger at, you know, it's the tech that's the challenge or it's the sort of the human users or implementers that are holding things up. 

It's really clear that, you know, there's issues really at the interface of the technology and of all of the humans involved.  And really what shone through is the need to build trust, using evidence, and to do all of that built with sort of the end user, whether that's the clinician, whether that's the patient, or whether it's the person charged with sort of implementing this technology, keeping them front of mind as we build that trust and evidence.  So, again, thank you so much to all of our experts.  This was terrific.  I learned a great deal and thank you for everyone for hanging on the line and for another great session, and I'll turn it back to our NIH -- I think I got that right -- hosts.

Sarah Lisanby:
Yes, thank you very much, Jenn and Jeff, for a stimulating session, which is a very nice segue into our last session prior to discussion, which will be on ethics, privacy, and special populations -- specifically child and adolescent as well as geriatric patients.  So, I'd like to thank our co-moderators for this session, Dr. Dilip Jeste, who is senior associate dean for healthy aging and senior care, Distinguished Professor of Psychiatry, the Levi Memorial Chair in Aging, and director of the Stein Institute for Research on Aging at UCSD.  I'd also like to thank Dr. Jeremy Veenstra-Vanderweele, who is the [unintelligible] professor for the implementation of science for child and adolescent mental health at Columbia, and the director of the division of child and adolescent psychiatry.  And together, they are going to lead us through our discussion with these really outstanding panelists.  So, let me hand it over to Dilip.

Dilip Jeste:
Thank you.  Good afternoon, although it is still morning in California.  Thank you, Holly, for inviting me to join this really important workshop.  So, this last session on ethics, privacy, and special populations is especially important in the field of mental health.  There are some special challenges in the field of psychology.  One is that most of our outcome measures are subjective, unlike, say, radiology or oncology.  Secondly, most of our patients and their families don't have access to state-of-the-art technology.  And last but not least, our consumers are often vulnerable to being taken advantage of or abused by others.  So, these issues of both privacy and security, as well as ethics, become critical.  And I'm just delighted that we have a terrific panel of experts in various medical [spelled phonetically] areas, and we'll be hearing from them.  Let me turn it over to my co-moderator, Jeremy.

Jeremy Veenstra-Vanderweele:
Thank you, Dilip.  And I should say, it's been a really exciting workshop thus far.  It feels like there's been a thread of ethical discussion running through from the very beginning.  We are going to do our best to, at least for the first half, stay out of the way of our eminent group of panelists.  Really a terrific group.  We're going to provide only quite brief introductions and let them speak for themselves.  And then in the second half, we will do our best to ask at least somewhat provocative questions before opening things up for a general discussion.  Dilip, I think you'll introduce the first few?

Dilip Jeste:
Yes.  So, I'm delighted to start with my colleague Camille Nebeker, who is associate professor of family medicine, public health, and psychiatry at the University of California, San Diego.  She's also director of ReCODE Health, the Research Center for Optimal Digital Ethics.  Camille?

Camille Nebeker:
Thank you, Dilip, thanks so much, and it's a pleasure.  I wanted to start off, first of all, just to do a brief primer on what we mean when we talk about ethical, legal, and social implications.  And I've just done this quickly so we can be grounded in at least how I'm thinking about ethical dimensions of research in clinical practice, and especially with respect to digital tools that are being used in mental health as well as general population health.

So, under the ethical domain, we think primarily of the Belmont principles of beneficence, respect, and justice.  Is the consent accessible?  Do the people that we're asking to participate in studies, or to use a certain type of clinical intervention, have the health literacy, data literacy, and technology literacy to be able to use whatever we're asking them to use? 

Oftentimes, the technologies can pick up signals from people that are nearby the participant or the patient, so we also need to think more about what the bystander rights are with respect to using digital tools in health.  Beneficence -- the principle of beneficence is really about looking at the risks in relation to benefits.  How do we identify those risks, quantify those risks, and then balance them in relation to potential benefits that could come not only to the individual, but to society at large?  And justice is really about who is sharing the burden of any type of research we are pushing out.  Are they the people that are most in need of what's developed, of the knowledge that's developed?  And so, access is a real critical part of that.

When we move into the regulatory domain, the focus is really on the law, which is not an area of specialization for me.  But the federal regulations for research have been updated; do those updates really map to the types of activities that are happening across the digital health spectrum?  We look at conflict of interest, and the new privacy rules like GDPR and the California CCPA regulation.

And then social impact.  We have to think about, what is the downstream effect of the things that we're deploying now?  How have we engaged our communities in helping to determine whether or not the things that we're planning to do are right for them?  And then this newer area, returning information, returning value to people that are participating in our study is becoming an area that is really important to unpack.  Next slide, please.

So, think about the characteristics.  These are also challenges of digital research.  As we all know, we're very connected.  The smart environment -- we're using these tools more frequently in research.  We can monitor and intervene with people 24/7 in real-time.  We can capture just [inaudible] amounts of granular data.  [inaudible] data anonymity anymore.  We cannot even really be confident that deidentification is going to work, and so, you know, the other thing that's happened in the past six, seven years is that these tools have enabled access to people who are now doing studies on themselves. 

So, a growing area of citizen science is happening.  We also have, you know, our technology industries involved in biomedical research as well as non-profits.  So, whereas we have regulations in research, for those of us in academic environments [inaudible] to accept federal funding, those regulations do not map to everyone who is involved in this ecosystem.  And so, we have unregulated, regulated, untrained, trained, and people in this sector that are very thoughtful about ethical dimensions, and some that have no idea what they should be thinking about.  And so, this is a really important area to be thoughtful of at the very front end of anything that's being developed to make sure that we don't experience those downstream effects.  Next slide, please.

And so, in collaboration with Dr. John Torous and Rebecca Ellis, we have developed a framework, created through an iterative research process, with ethical principles at the core.  What this is aimed at is helping people who are developing technology, researchers choosing technologies to use in research, and clinicians trying to make a decision about what tool might be useful for their patients, to think through the privacy implications, access and usability, data management, and risks and benefits.  And so, what we've done is create, using this as our framework, a checklist that helps people think through, "What do I need to think about before making a decision to use a certain product?"  I can talk more about that offline and share this checklist and framework with those of you that are interested.  And I'll pass it back to Dr. Jeste.  Thank you.

Dilip Jeste:
Thank you, Camille.  Our next speaker is Dr. Ken Duckworth, who is medical director of NAMI, the National Alliance on Mental Illness, the largest national grassroots organization in mental health.  Ken?

Ken Duckworth:
Hello, Dilip, and hello everybody.  I've learned a lot so far, and Dilip, I remember fondly a presentation at the NAMI convention some years back.  We love our researchers.  I just want to let you know, NAMI's had quite a couple of weeks.  We're experiencing record calls at the help line and, you know, the state and local affiliates are being, I would say, overwhelmed with questions about mental health and COVID-19.  One thing I want to offer, from a few points of perspective, is that when I talk to people at NAMI, and I do listen to them a lot, one of the problems we face as a group of individuals trying to make things better is that they have at times [unintelligible] been oversold on second-generation [unintelligible] antipsychotics that don't have side effects, for example, or on antidepressants that don't cause dry mouth.  Each has been revealed to have different and profound problems for the population.

So, one of the things, you know, I'm interested in thinking about is how to frame this for people so that it doesn't become another set of what people would call disappointments, after they've been told that new treatments aren't materially better.  And I acknowledge that some new treatments are better for some people, but I just want you to know that's kind of the heart of the framework for people.  They're observing that they're taking the same medications, or ones with the same mechanisms of action, as their family members might have taken two decades ago, and access to hospitals is hard, access to prisons is easy. 

So, this is the -- we have a loving and wonderful community, but a lot of people feel disaffected from the overall outcome of the fragmented, chaotic, patchwork mental health system.  And so, how do you do co-creation with people with this experience?  I heard several people talk about this idea, giving people feedback.  How do you help people self-manage, ultimately, the idea that symptoms are useful for their own recovery journey?  How do you do co-creation so that the people who are the end users of the process experience this as something that is collaborative and ultimately for them? 

I do want to mention, on a different note, that NAMI did a project with Google putting up the PHQ-9 on mobile phones.  And while I’m not at liberty to discuss the numbers of people who took the PHQ-9 on mobile phones, the number is enormous, and I feel like that's an example of technology that didn't require a lot of complexity, and we had large numbers of people come to the NAMI webpage as a result.  In the British Medical Journal, I was invited to do a debate with a gentleman from England who had a much better accent than I did who was against this entire democratization of screening tools. 

And so, it's just interesting to me.  His concerns were largely about privacy -- this is [unintelligible].  Privacy, and the lack of interpretation of these tools.  At NAMI, we would trust people, you know, you have to empower people.  They're already living with their symptoms, and we feel, like blood pressure, people should know their PHQ-9 score, to give one very simple example of how you can get technology out.  So, until we have a uniform application of things like PHQ-9 in people's lexicon, like blood pressure 120 over 80, I think we are still going to face an uphill battle.

I was impressed by the critiques on privacy and the lack of trust of big data and institutions to guard individuals.  Google has made it really clear that they had no access to anybody's information of any kind when they listed the PHQ-9.  So, that was just an interesting, small experiment that we did.  A lot of people took that and that didn't require a tremendous amount of collaborative co-creation because it's a tool that already existed.  So, I'll stop there.  And thank you all for all that you're doing, and I appreciate, you know, what I'm learning today.
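
For background on the screening tool discussed above: the PHQ-9 is nine self-report items, each scored 0 to 3, summed to a 0-27 total with conventional severity bands.  The sketch below shows the scoring arithmetic; the example responses are invented.

```python
# Minimal sketch of PHQ-9 scoring: nine items rated 0-3, summed to 0-27.
# Severity bands follow the commonly cited cutoffs; responses are invented.
def score_phq9(responses):
    assert len(responses) == 9 and all(0 <= r <= 3 for r in responses)
    total = sum(responses)
    if total <= 4:
        severity = "minimal"
    elif total <= 9:
        severity = "mild"
    elif total <= 14:
        severity = "moderate"
    elif total <= 19:
        severity = "moderately severe"
    else:
        severity = "severe"
    return total, severity

print(score_phq9([1, 2, 1, 2, 1, 1, 2, 0, 1]))  # -> (11, 'moderate')
```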

Dilip Jeste:
Thank you, Ken.  The next speaker is a colleague of Jeremy's.  Randy Auerbach is associate professor of psychiatry at Columbia.  He's also director of the Translational Research on Affective Disorders and Suicide lab at Columbia.  Randy?

Randy Auerbach:
Thank you for having me.  So, in full disclosure, I'm not an ethicist by training, but I think deeply about these issues as it relates to trying to translate what we're doing in the lab to try to understand how we could actually detect it in real-time.  And I think, perhaps most notably since my lab focuses on depression and suicide, we're really trying to figure out what and when.  So, what are the things that actually -- excuse me -- are important to look at?  And most notably, when do these things occur? 

And -- sorry, could you advance?  And so, in addition to using multi-modal neuroimaging, we've also used passive sensor data collection through smartphones.  And, as many of the speakers have explained in prior presentations, this passive sensor technology, particularly as it relates to adolescents, provides untold access to what is going on in the context of adolescents' everyday lives.  And you can use a number of different sensors to target very, very specific questions, and what we've done in our lab is really try to characterize it as well as possible, whether it be, you know, typically or neurally, and then try to see if we can detect it and translate it outside the lab. 

And what's so promising about this in the context of adolescent samples -- if you could advance the slide.  Yeah -- is that what you can then do is not just real-time assessment of these potential behaviors that may map onto depression or suicidal behaviors; it also affords you these just-in-time interventions.  And what I mean by this, and what's been noted more colloquially throughout, is not necessarily intervening as in treatment, but intervening as in interceding when individuals may actually need services.  So, perhaps the most [unintelligible] case is in the context of suicidal behaviors, whereby you're able to intervene as somebody navigates into a risk state in the context of depression, and maybe intercede earlier in their disease course, or earlier in the context of relapse, when intervention may be more successful.  So, it's really within this context, and the context of adolescent populations, that I've been thinking deeply about how to protect our participants in ongoing studies throughout the lab.  Next slide, please.

So, there are a wide range of issues at play here, the first being access to private data.  When parents and teens come into the lab, we actually explain in painstaking detail what we're collecting.  And this encompasses the pictures that they're taking on their phone, the websites that they're visiting, the words that they're typing into their phone, and the locations and places that they're going to be, and we worked with our IRB to figure out the appropriate language. 

Remember, you have to boil it down into a vernacular that is palatable both to adult populations and to adolescent populations, particularly adult populations where English may not be their native tongue.  And so, what we've done to ensure accessibility of this information is to include a rudimentary quiz: you know, are they aware of the types of information that we're collecting, how long we're collecting it, and things of that nature. 

But what you also see, looking at adolescents and adults, particularly parents, is that there's this disconnect around privacy.  Generally speaking, adolescents are much more liberal in the type of information that they might be inclined to give away, and parents tend to be more conservative.  But it's also the case that parents aren't aware that they're currently giving a lot of this information away in terms of what they're doing on their phones and what apps they may or may not have downloaded, and things of that nature.  And so, it often begets a broader conversation about privacy.  And a point that was touched on by Camille much earlier is, can we guarantee privacy?  And the answer is categorically no.  We can do our best -- we apply cutting-edge encryption standards, and we're updating those, typically month by month, as things improve -- but for our purposes, we just try to acknowledge the limits of what we can and can't do.

Something that has been touched upon, I think, briefly in probably each of the sessions today is our inability, in certain cases, to do real-time data analysis.  Now, there are certainly studies that are able to do this, but our prediction problem actually isn't solved -- so, for example, Dr. Ryan showed these AUC findings at .92 -- and if we don't necessarily know the variables of interest going in, even if we have a thought about what might be of interest, you know, we really can't analyze in real-time. 

And as I think some of the engineers in today's presentations noted, actually fitting these data is extremely complex.  So, we have both a what -- what are the variables that are of most interest? -- and a how, in terms of analyzing these data, which is extremely complex.  And so, what we've done is really try to be forthright with [unintelligible] population, meaning that we view this as at least a three-step process: the first is that we're trying to identify high-value targets across our different features within the passive sensor data.  Once we have identified some promising markers, we then seek to test those markers.  And then, as a third phase, we try to validate them, which is not totally different from a lot of the RCTs that came before us. 
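
As a rough illustration of the "identify high-value targets, then test" workflow described above -- and not the lab's actual pipeline -- the sketch below screens candidate passive-sensor features by cross-validated AUC on synthetic data.

```python
# Illustrative sketch (synthetic data, not the lab's actual pipeline):
# screen candidate passive-sensor features by cross-validated AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
y = rng.integers(0, 2, size=n)                            # synthetic outcome labels
features = {
    "nightly_screen_time": y * 0.8 + rng.normal(size=n),  # informative by construction
    "daily_step_count":    rng.normal(size=n),             # pure noise
    "typing_speed":        y * 0.3 + rng.normal(size=n),   # weakly informative
}

for name, x in features.items():
    auc = cross_val_score(LogisticRegression(), x.reshape(-1, 1), y,
                          cv=5, scoring="roc_auc").mean()
    print(f"{name:22s} cross-validated AUC = {auc:.2f}")
```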

And then, I think the last point from an ethical perspective -- and this is not meant to be an exhaustive list, but something that is active in the day-to-day of what we do -- is that there are these unanticipated real-time findings.  An example that we can draw on from my own data is that we do a lot of natural language processing, and depending on the technique that you're employing, you find very interesting things when you analyze the words that people type into their phones. 

So, you identify things like unwanted sexual contact, molestation, sexual assault, and things of that nature.  And particularly as it relates to teenagers, you don't always have the commensurate amount of information to file reports, and I think that we're grappling to figure out what is the best way to handle these types of data, in terms of protecting our patient population and protecting against perpetration of things like unwanted sexual contact.  And again, what I encourage -- and this is not the [unintelligible] -- is that we treat these as active, ongoing consultations with our IRB, to make sure that we're handling these issues in the most judicious and safe way possible.  Thank you.
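
A deliberately simplified sketch of the kind of flagging described above: scan typed text for risk-related phrases and route hits to human review.  The phrase list and routing are hypothetical; real pipelines in this space use far more careful NLP and clinical review workflows.

```python
# Deliberately simplified sketch: flag typed text containing risk-related
# phrases for human review. The phrase list is hypothetical; production
# systems use far more careful NLP and clinical review workflows.
import re

RISK_PATTERNS = [
    r"\bkill myself\b",
    r"\bend it all\b",
    r"\bsexual assault\b",
]

def flag_for_review(messages):
    """Return (message_index, matched_pattern) pairs needing clinician review."""
    hits = []
    for i, text in enumerate(messages):
        for pattern in RISK_PATTERNS:
            if re.search(pattern, text, flags=re.IGNORECASE):
                hits.append((i, pattern))
    return hits

sample = ["had a great day at school", "sometimes i want to end it all"]
print(flag_for_review(sample))   # -> [(1, '\\bend it all\\b')]
```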

Jeremy Veenstra-Vanderweele:
Thank you, Randy.  These are difficult things to wrestle with.  It's my pleasure to introduce Paresh Patel, who is an associate professor at the University of Michigan.  We heard earlier from Peter Zandi about the National Network of Depression Centers, and Paresh is going to share with us some of the data use and management issues that have arisen regarding ethical issues within the mood outcomes program of the NNDC.  Take it away, Paresh.

Paresh Patel:
Thank you, Jeremy.  And thank you to Holly and NIH for their foresight in organizing this workshop.  As Peter introduced in session two, the NNDC has a clinical and a research mission.  One component of the latter is building a large, rich registry of mood outcomes data in order to develop better treatment strategies for mood disorders.  It's achieving this through cooperation among the 26 -- and growing -- academic centers and non-academic mental health treatment programs.  We've already heard some of the core considerations raised by other presenters in discussions this morning, and here I just want to highlight the ones that are relevant to the NNDC dataset, which are listed on the slide.

It goes without saying that we're working with sensitive personal data, and pains are taken to maintain privacy in the abstraction and storage of that data.  The NNDC research data is considered secondary because it's derived from a primary dataset developed for providing just-in-time clinical decision support to clinicians, but it really challenges the ethical principle of respect and transparency, because patients are not explicitly told about the deidentified abstraction, and are not aware, other than through the sort of boilerplate language that most [unintelligible] have in the general consent for treatment, that deidentified findings from their records will become part of a national registry.  We can talk about this some more in the discussion, but this is an issue for all the [unintelligible] registries, many of which also collect PHI. 

One of the biggest challenges to privacy is really an embarrassment of our success, which is that increasingly sophisticated analytic platforms could potentially reidentify patients through the linking of separate datasets.  Luckily, some of the more fruitful discoveries come from a combination of datasets, but this is a privacy risk that's difficult to anticipate and arguably not universally considered by IRBs as they try to negotiate this complex and rapidly evolving landscape.  Another ethical consideration with federated data is data integrity and accuracy.  This is a principle that GDPR pushes. 

People are mobile, and so federated clinical data may contain disparate fragments of the same individual's data, and in a deidentified dataset that's very challenging.  In our fragmented national healthcare system it's particularly challenging because we don't have universal patient identifiers, so harmonizing that data is really difficult -- arguably almost impossible with deidentified datasets -- and it introduces a lot of noise.

The ethical principle of justice is always challenging.  There is obviously an ethical imperative to be attentive to whose data we're leveraging, but most academic centers have developed data stewardship committees that hold the intellectual property in their clinical population.  This was brought up earlier, and it raises the question: as clinical margins to support research continue to shrink, what barriers might this intellectual-property focus raise?  Currently, in our experience, it hasn't been a huge one, but it does come up.  A significant ethical concern with research on deidentified data is beneficence around actionable findings.  Deidentification makes some research datasets possible, such as the NNDC research [inaudible], but it also creates a barrier to feedback of actionable findings.  And, as has been mentioned before, research populations often suffer from underrepresentation of disadvantaged populations; our collection reflects the underlying population, but it is biased by representation at our major academic centers. 

Finally, I want to raise a provocative quandary, which is not listed here but is highlighted by the COVID epidemic.  All of us in the clinical space are witness to an unprecedented relaxing of the privacy rules in order to address an acute threat that, by the latest estimates, could take upwards of 100,000 lives in the U.S.  And yet, these regulations present challenges to getting the kind of data we need to address the scourge of, for example, suicide, which has been taking up to 50,000 lives annually for many years.  I'm not advocating that we abrogate research-related patient protections, but we really need to think about developing technologies and policies that balance the ethics of protecting patients with the ethical imperative, if you will, of quickly arriving at more effective treatments, especially when it involves large-scale, multi-site, federated data.  Next slide.

So, here, I just tried to think about some potential solutions.  These are not necessarily novel, and some are things that people are already doing, like masking or coding sensitive information.  Something that we, as a country, have not really come to grips with is a universal patient identifier, but it could really, potentially, help significantly toward integrity and accuracy in federated data.  We need better mechanisms to re-contact patients in deidentified datasets for transparency, opt-out options, and actionable findings -- something like an Honest Broker system, which some institutions do deploy, but in the NNDC dataset that would be very challenging.  There are some data management strategies that people are developing, some sophisticated algorithms for trying to manage PHI risks in distributed networks, with the idea that we need ways to make sure that integrated data doesn't accidentally reidentify subjects.  As I mentioned, we need to promote models that balance patient versus institutional rewards.  Maybe increase incentives and decrease barriers for data sharing, while maintaining safeguards for data security and patient confidentiality.  And finally, figuring out incentives to balance the challenges of recruiting underrepresented and disadvantaged populations.  Thank you.
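
One concrete reading of "masking or coding sensitive information" is to replace direct identifiers with keyed hashes so that records can still be linked across sites without circulating PHI.  A minimal sketch follows; the key handling shown is intentionally naive and purely illustrative, not an NNDC procedure.

```python
# Minimal sketch: replace a direct identifier (e.g., MRN) with a keyed hash
# so records can be linked across a federated network without sharing PHI.
# Key handling here is intentionally naive and for illustration only.
import hashlib
import hmac

NETWORK_KEY = b"shared-secret-held-by-honest-broker"   # hypothetical key

def pseudonymize(mrn: str) -> str:
    """Deterministic keyed hash of a medical record number."""
    return hmac.new(NETWORK_KEY, mrn.encode(), hashlib.sha256).hexdigest()[:16]

record = {"mrn": "123-45-678", "phq9_total": 14, "visit": "2020-03-15"}
deidentified = {**record, "mrn": pseudonymize(record["mrn"])}
print(deidentified)
```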

Jeremy Veenstra-Vanderweele:
Great.  Thank you, Paresh, and thank you for keeping right to time.  I'm conscious that we want to make sure that we do keep our next speakers to the four minutes each.  It is my pleasure to introduce Luke Chang, assistant professor at Dartmouth and director of the Computational Social Affective Neuroscience laboratory, who is grappling with these questions in real-time. 

Luke Chang:
Thanks.  It's been an absolute pleasure to hear from all these different perspectives about the promise and also the issues of working across different types of systems, from private companies to healthcare networks and also researchers.  So, I think Camille outlined a really nice ethical framework for how we should think about the different ethical principles for protecting privacy, but then also for maximizing benefits to our research participants.  But I want to basically make a case for secondary data analysis -- this is basically when you work on a dataset for a different question than the one it was originally developed for, than what you had consent from participants for, and than what the funding might have been for.  And there are a lot of benefits to this -- there have been a lot of conversations about it over the last few years -- but I just want to highlight some of the highlights. 

So, one of them is that it enables new research questions that weren't thought of in the beginning; you can go back to old datasets and try to compare them or generate new things, and there's a lot of promise in that.  It also allows us to enhance scientific reproducibility and minimize false positives by basically increasing our power and sensitivity to [unintelligible] effects that might be there.  And also, as I think has been mentioned by multiple people throughout the day, it facilitates the development of new research methods -- I can't remember exactly who mentioned it this morning, but someone had mentioned ImageNet and some of these datasets that have been publicly available for computer scientists to develop new models. 

And a lot of the innovations that have come have been the result of having better computational methods, but also of having much better data.  And so, I think that's really important, and the people doing that work are not always on the teams that have access to the data.  It also reduces the costs of doing science if you can pool data.  And then another underappreciated thing, I think, is that it protects these really valuable scientific resources that are generated from these datasets, and the enormous amount of money that's poured into them.  So, if a hard drive fails, or a database gets corrupted, or a researcher isn't around anymore to share it, having redundant copies that are put out and stored around the world is, I think, really important.  Next slide.

So, those are the benefits, why we want to do secondary data analysis, but I think it raises even more ethical issues.  First, there are clear issues of privacy, especially with HIPAA restricting anything that's identifiable.  This could be face information, if you're trying to detect, say, emotions; or signatures from voice about how someone's feeling that might put them at greater risk for self-harm.  You can't really share these openly or easily.  GPS location is another one, and device identification.  So, if you want to use RFIDs -- radio frequency IDs -- to see how many people someone is coming into contact with, this type of data is impossible to share, even though it's being collected by all of these passive sensing [spelled phonetically] studies. 

Another thing is respect for persons.  So, people give informed consent for the primary analysis, but all these other questions might come up and, in theory, they should be able to choose if they want their data used for these other things, but that's not always possible.  They might not be able to be contacted, and the people doing the secondary analyses might not have the ability to contact them.  And there have been some legal cases about this that have raised a bunch of issues as well. 

And then on the other side, participants are spending all this time to participate, and we want to make sure they get the most benefit from everything they do, even if it wasn't what was initially intended in the original study.  And I think it can also speak to justice: increasing the generalizability of our research findings, and being able to train models that are much more robust and generalizable, will hopefully make this accessible to many more populations who might not have been initially involved in some of the original data collection efforts. 

Okay, so other issues that I think are ethical but also just really open questions, especially if datasets are kind of being collected across healthcare networks and private companies and also research institutes with federal funding.  So, who owns the data?  Is it the participants themselves, is it the researchers who collected the data, the research institutes, or corporations?  Funding agencies who might have done it, or even the journals where it gets published?  Like, these are things that are being grappled in many different fields.  Sharing data is really important, but it's also incredibly costly. 

So, it's time consuming; the amount of storage, especially when you start scaling up, is really large, as is the bandwidth to transfer datasets between people.  There are some datasets in the [unintelligible] community, which I'm more a part of, where you actually have to mail hard drives because they're too big to send online.  And also, I think an underappreciated thing is that you might get funding to collect data for a certain amount of time, but there are indefinite time horizons on the responsibilities of the people who've collected the data to share it.  So, this might persist for decades, potentially, if you have a really interesting dataset.  And that kind of ties into data stewardship. 

So, often, researchers are doing this while wearing many different hats, and being the administrator for sharing the data -- verifying HIPAA compliance and all these other things -- is a very arduous task, so that's expensive.  There also might be legal expertise needed for negotiating license agreements and intellectual property and things like that.  This has led to some universities charging handling fees upwards of thousands of dollars just to handle these kinds of things. 

Also, this could go on forever, depending on who collected the data.  And who should shoulder this responsibility?  Should it be the people who collected the data, or should it come out of the funding itself?  And then the last thing I want to say is, when you scale up data sharing, lots of difficulties emerge.  One of them that's been touched on, but I don't think has been explicitly stated throughout the day, is the idea of data standardization.  So, we can have all these datasets, and maybe you can just download them, but if you don't know how to aggregate them, or what all these variables are, or who matches to whom, that can be really onerous.  Thanks.
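
As a small illustration of the standardization problem raised at the end of these remarks, the sketch below harmonizes two sites' differently named and coded variables onto a shared schema before pooling.  All column names, value codes, and data are invented.

```python
# Small illustration of harmonizing differently coded site datasets onto a
# shared schema before pooling. All column names and data are invented.
import pandas as pd

site_a = pd.DataFrame({"subj": ["a1", "a2"], "dep_score": [12, 7], "sex": ["F", "M"]})
site_b = pd.DataFrame({"participant_id": ["b9"], "phq_total": [19], "gender": ["female"]})

# Per-site mapping from local variable names to the shared data dictionary.
SCHEMA_MAP = {
    "site_a": {"subj": "participant_id", "dep_score": "phq9_total", "sex": "sex"},
    "site_b": {"participant_id": "participant_id", "phq_total": "phq9_total", "gender": "sex"},
}
VALUE_MAP = {"sex": {"F": "female", "M": "male", "female": "female", "male": "male"}}

def harmonize(df, site):
    out = df.rename(columns=SCHEMA_MAP[site])
    out["sex"] = out["sex"].map(VALUE_MAP["sex"])   # recode values onto shared labels
    out["site"] = site
    return out

pooled = pd.concat([harmonize(site_a, "site_a"), harmonize(site_b, "site_b")],
                   ignore_index=True)
print(pooled)
```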

Jeremy Veenstra-Vanderweele:
Thank you, Luke.  And I'm privileged to introduce our last speaker, who has real-time experience managing vast amounts of data as acting director of the division of scientific programs for the All of Us research program.  Holly Garriock has her training in genetics and pharmacogenetics, which is one domain where we've actually seen these sorts of database-driven decision-making tools, [inaudible] for better or worse.

Holly Garriock:
Thank you, Jeremy.  I'm going to do a quick check.  Can you hear me okay?

Jeremy Veenstra-Vanderweele:
Yep, sounds great.

Holly Garriock:
Yeah, great.  All right, so, I'm also not a card-carrying ethicist, but we do have our ethical, legal, and social implications -- our ELSI -- specialist and lead for the program, and she'll be on the call as well.  So, if any deep-dive questions come in, I will call on her to help back me up.  I hope to be in line, largely, with a lot of the framework that Camille put forward in that first presentation, and I think what I'll be saying also addresses a lot of the points that Paresh made as well.  So, hopefully this will be a nice flow of presentations, from framework down to how we're actually doing it in this large national program. 

So -- hopefully I'm clear; my slides got a little confused here -- but hopefully everyone knows the All of Us research program is a large program aiming to enroll a million or more participants in the United States who are willing to donate their electronic health records, self-report survey answers, a fair amount of blood and urine, and a physical measurements assessment, as well as other mobile tech information, over a longitudinal period of time. 

This slide that I'm showing you here is how we try to balance a lot of the ethical and legal tensions that we have as we are implementing the program and partnering with our participants.  The first one here is appropriate reach.  So, we have a large focus on diversity, and this goes beyond the kind of standard race/ethnicity definition of diversity.  We have about nine categories of diversity that we use to account for groups historically underrepresented in biomedical research. 

And so, we want to make sure that we're not going too far, but we are going into the right places, both geographically as well as legally.  So, enrolling the participants that we are able to enroll by law, and the groups that we're including and purposely not including at this point in time for ethical reasons.  And then demographically, as you know, I'm sure, the ages for consent vary by state, and so we're making sure that we are 18 and up in most states and 19 and up in some of the states where we have to be specific there.

This is a national implementation of the program, and so we're not able to bring participants into our office the way that Randy described and really sit down with them or their parents to talk about informed consent.  So, this is an electronic consent for all of the participants that enroll.  We need to make sure that the consent is at the correct reading level -- we aim for a fifth-grade reading level or lower -- and we need to make sure that they are engaging digitally in an appropriate way, and that our program really facilitates that.  So, we've done a lot of work with Sage Bionetworks to try to make sure that happens.  We include some formative assessments to try to promote understanding of the consent as they're going through it, and we also have video support as well as onsite clinic support for those participants that need it.
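
To make the "fifth-grade reading level" target concrete, one common heuristic is the Flesch-Kincaid grade level, which combines average sentence length with syllables per word.  The sketch below uses a crude syllable counter and is only a rough illustration, not the program's actual tooling.

```python
# Rough illustration (not the program's actual tooling): estimate a
# Flesch-Kincaid grade level for consent text with a crude syllable counter.
import re

def count_syllables(word):
    """Very rough vowel-group heuristic; adequate only for a ballpark score."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid grade-level formula.
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

consent_snippet = ("We will collect your survey answers. We will keep them private. "
                   "You can stop at any time.")
print(f"Estimated grade level: {fk_grade(consent_snippet):.1f}")
```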

For inclusivity, we are really trying to build trust by protecting our participants and our data.  We largely do this through community engagement and ambassadors across the country.  For maximizing benefit, I really want to focus on our concept of having the data broadly available, from community scientists to the individuals on this call to the academic ivory towers, and making sure that everybody has access to the data.  We're building the infrastructure and hoping everyone will come to the data. 

We do this using a cloud-based platform for broad access, to get around datasets and hard drives being passed across the country, so all the researchers come to the cloud, analyze the data, and use all the tools and everything that's possible there.  In order to minimize harm, we do a lot of data transformation to try to prevent reidentification of the data.  We have a couple of tiers of access to the data, we have a lot of policies and governance groups around access to the data, access to the biospecimens, and access to the participants, and we have a lot of other policies in place to try to avoid stigmatization as well.
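
A minimal sketch of the kind of data transformation mentioned above for reducing reidentification risk: shift dates by a stable per-participant random offset and generalize ZIP codes.  The specific transformations and field names are illustrative assumptions, not All of Us policy.

```python
# Minimal sketch (illustrative only, not All of Us policy): reduce
# reidentification risk by per-participant date shifting and ZIP truncation.
import datetime
import random

def transform(record, seed=42):
    # Seed the offset on the participant ID so all of that person's dates
    # shift by the same amount, preserving intervals between visits.
    rng = random.Random(f"{seed}:{record['participant_id']}")
    shift = datetime.timedelta(days=rng.randint(-180, 180))
    visit = datetime.date.fromisoformat(record["visit_date"]) + shift
    return {
        "participant_id": record["participant_id"],
        "visit_date": visit.isoformat(),
        "zip3": record["zip"][:3] + "**",     # generalize 5-digit ZIP to 3 digits
        "phq9_total": record["phq9_total"],
    }

raw = {"participant_id": "P001", "visit_date": "2020-03-15",
       "zip": "10032", "phq9_total": 9}
print(transform(raw))
```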

A lot more to say, I'm realizing I'm at my four minutes.  I'm happy to leave the rest for questions.  My next slide was around privacy, so it kind of shows the governance that we have in place at the program around privacy and security, and then the following slide is just around [unintelligible] information to participants, which is one of our core values and we really see as a return of value to participants.  Sorry, Jeremy, went over.

Jeremy Veenstra-Vanderweele:
Thank you, Holly.  No, appreciate you tracking your own time and holding to it, it's very helpful.  I think recognizing the time, Dilip and I are going to go back and forth with some fairly rapid-fire questions but open them up to the overall group to chime in, and then also monitor questions in the background.  Dilip, you want to ask the first question?

Dilip Jeste:
Thank you, Jeremy.  Again, thank you to all the speakers -- great talks.  So, one theme that seemed common to several of the presentations was giving feedback to the patients about the research information we have collected.  And it's not as simple a question as it may appear, because we are collecting information that is of research value, but does it have clinical value?  And it may wind up going in the opposite direction, in the sense that we may be interfering with the clinical management of the patient being done by somebody else.  So, I just want to open this up for the different speakers: what is your recommendation about giving feedback of the research information to the study participants?

Randy Auerbach:
This is Randy.  It's an excellent question, and it's something that we grapple with.  Briefly, I think it really depends on the clinical population with which you're working.  You know, in our own data there's a real negative to having false positives for a suicide attempt, insomuch as, yes, ostensibly you will keep this person safe, but this person will likely not continue to participate in your work or continue to use this tool clinically. 

And I think Dr. Torous and several others intimated that technology is great and uptake is easy -- well, not incredibly easy -- but dismissal of these tools when they're not working effectively is rapid.  And I think that we need time, depending on the clinical question, to develop the right tools to address the right question. 

Camille Nebeker:
I'd like to build on that, Randy.  I think that this is such an important area.  Return of information is not a traditional mindset for academics.  And when it's clinically actionable, that's one area that, in my years as an IRB member, we've struggled with.  We've come up with protocols.  How do you return information in a way that is responsible, that engages the clinician, and that doesn't cause upset? 

And we're working with [unintelligible] on a longitudinal study with older adults.  Everyone is over 65 years of age.  And it's a five-year study where we're collecting physical data, mental health data, cognitive data, and also, they're using wearable sensors.  So, we're getting so much information about them.  And what we did is we wanted to know what is it you want back?  How do you want to be engaged in the study for five years? 

And this gets at short-term and long-term engagement.  So, when we spoke to them and we conducted focus groups, we actually work closely with the people that are in the study as you do.  And what we were saying to them is that, you know, we have this information.  What we're trying to do is predict whether or not you have a higher likelihood of falling down six months from now, or that your, you know, cognitive abilities are declining.  They said to us, "We want to know what you're learning.  And we want to know not only how we compare with people that are like us, but how I compare to myself over time.  " As Dilip said, some of this is clinically relevant and some of it is research data. 

So, what we did, before we started to think about how to return individual-level data, was look at how to return group-level information.  We took the publications that we'd put in the peer-reviewed literature, and we broke that information down -- you know, similar to what you'd see in a press release.  And we shared that with them.  They said, "That's not quite good enough.  We don't understand some of these words.  We don't know how that relates to us."

And then we took that information and synthesized it even more and created infographics.  And we took it back to the community, and we said, "Well, what do you think about presenting information at a group level this way?"  Got more feedback about words, about layout, about colors, and now we're on our third iteration.  And I think engaging the community is critical for learning what we need to give back. 

And it's basically tailoring it to them.  Some want to know what's actionable.  Some people don't want to know if it's not actionable.  So, we really have to be thoughtful.  When I was working with the All of Us research program, it was returning assay information.  Well, what if we find out that you live in an environment where you're exposed to lead, or you're exposed to other toxins?  You take that information to your physician; what can you do about it?  And then you get the question between the participant and their clinician: How did you get that information?  What are you going to do with it?  So, there's a big area of work that needs to be done here.  So, I'll stop there. 

Jeremy Veenstra-Vanderweele:
Yeah.  Thank you.  I wonder, Holly, if you could comment on this.  I know return of information is a big issue in All of Us.  If you could maybe talk about how you have the conversation with participants in advance about what they want to get back and not get back? 

Holly Garriock:
Yeah, thank you, I would love to.  So, our participants are partners.  They're involved in many parts of our governance across the program and especially around return of information.  And we really engage them as actively as possible. 

So, our participants have the ability to choose which results they want to get back.  We have a responsible return of genomic results, and there are different types of genomic results that you can give back to a participant -- medically actionable results, traits, and ancestry are the three large buckets that we work with.  For each of those types of results, they are able to choose which grouping they want to be informed about.  So, we have a series of informing loops within the consent process for return of information that the participants are able to engage in. 

Not only do we have that kind of active online support, we also have a genetic counseling resource that we've put a lot of resources into.  We have actual people on the phones who are able to support participants regardless of their genetic result, as well as the virtual experience that they're able to have. 

Jeremy Veenstra-Vanderweele:
Thank you. 

Holly Garriock:
But would you add anything there?  Oh, Sorry. 

Jeremy Veenstra-Vanderweele:
I don't know if she has access at the moment.  I want to jump over to Paresh or Luke.  Noting that there's also a question about whether people whose data is included in a large de-identified data set have some rights or should receive some information back if there's meaningful findings that come out of analysis there?  Or if that simply belongs to the health system or someone else?  I wonder if you could comment on that. 

Paresh Patel:
Sure, this is Paresh.  That is a really challenging question.  I think, you know, it's easy for us to conceptualize what happens at one end of the extreme, where you have maybe the data from one individual that has a lot of scientific impact.  I think Henrietta Lacks would be an example of that.  And then at the other end of the spectrum you have, say, Scandinavian healthcare systems, where when you query the data set you can literally query the entire population.  Somewhere in between is where you have, you know, maybe five percent of the population really contributing to knowledge that helps the other 95 percent. 

Is there a responsibility, you know, of returning some value to the individuals who participated?  I think it's an unanswered question.  And where that cut-off, if you will, occurs is really -- I don't know.  That's really something that we need help from ethicists on.  And I think it's something that we often don't even think about with de-identified data.  We sort of assume, okay, we've de-identified it; the patient is no longer involved.  And really, we have to think about whether there's some responsibility back to that patient. 

Jeremy Veenstra-Vanderweele:
Yeah, I think you can think about some of the population level data, whole populations like Iceland who've participated in large research studies, and what that obligation is.  It's a little different when you're part of an electronic medical record in a particular system. 

We could pivot a little bit, and I'm thinking about some of the earlier conversations today.  If we could maybe have a discussion of when you move from research to clinical implementation.  And I think sometimes about the example of pharmacogenetics, where we had what is essentially prospective-retrospective prediction of outcome -- because the genetics comes first -- in antidepressants and other treatments, that rapidly turned into tests that were available for clinicians to use in everyday clinical care. 

My experience as a clinician, mostly seeing patients in consultation, was that it sometimes led clinicians to turn off their existing knowledge and defer to the results of the test.  So, I work mostly with kids with autism.  I'd see kids who were on a medicine that has no evidence whatsoever in autism but had a green light on the pharmacogenetic test result.  And I think it's one of the things we have to keep in mind that just having a prediction doesn't mean that knowledge of the prediction will improve clinical care.  And I wonder if folks could wrestle with that question a little bit? 

Dilip Jeste:
You know, along that line actually, continuing the same line of questioning.  I mean, you have talked about the fact that the most important stakeholders for technology are the end users, the people who -- the consumers.  So, the question that I am asking is how do you see it from the perspective of the end users? 

Jeremy Veenstra-Vanderweele:
You know, it's a great question, and I think there's a lot we don't know.  I think the more the people we're trying to serve are engaged in the process, the better.  I love your earlier comment.  You know, that when you're designing something, you're asking them in real time, "What would be useful for you?"

Because life is full of surprises.  And some people may or may not want to schedule their appointments on their phone.  They may be too paranoid for that.  But some people will want that.  I think it's going to differ by person.  Anticipation of relapse will be important for some people, but I think ways to stay connected to others, for people who were isolated even before the virus, you know, due to their vulnerabilities, will be helpful for others. 

So, I think this is a very interesting space.  I think there is opportunity.  And I salute those of you that are asking this question because I think this is the linchpin of this making a difference in people's lives.  So, as you're asking the question, you're doing the thing that I'm really hopeful that you would be doing.  So, I applaud your thinking. 

Dilip Jeste:
Thank you.  Does anybody else have a comment on that? 

Sarah Lisanby:
I'm just going to make a general remark that we do have some other ethics experts who are not speakers but are participating online.  We also have some questions from the chat box and Q and A box, which David Blightman is queuing up for us.  When you're ready, just call in, David, and then we'll bring those forward. 

Randy Auerbach:
I just wanted to say that what is very common, and I think this was highlighted very nicely in each one of the panels today, is that we kind of fall into this binary either/or situation.  But I actually really do feel, and I think this was so well underscored in each panel, that it has to always be this kind of shared agreement between the clinician and the patient in terms of what tools are appropriate, guided by clinical insight and the patient's insight and relevant experience.  I don't think that these tools will ever exist in a vacuum, particularly as you think about moving from high income to low income countries, you know, from youth to adult populations who embrace technology in very different ways.  So, I guess I just wanted to maybe make that comment. 

Paresh Patel:
If I can just briefly comment on Jeremy's original question, which is how do we get that information back to the clinician and get the clinicians to use it?  And that is -- as Jeremy, you're probably very familiar with this -- those of us in clinical psychiatry have very good data to establish that measurement based care is better than standard care or treatment as usual.  And despite the strong evidence, it's very challenging to get clinicians to change that framework.  Some data suggest it takes 17 years or something on that order to get people to change their practice.  It's really like literally training a whole new group of people with a new skillset, but we've still got to keep doing it.  I think it's really critical. 

Jeremy Veenstra-Vanderweele:
Yeah, absolutely. 

Holly Garriock:
In All of Us, I was going to say, when we're giving information or results back to participants, especially for the medically actionable genomic results, we try to make the reports as useful and responsibly interpreted for the providers as well.  The participant has to bring the report to their provider.  We don't provide information to their provider.  So, we try to draw that line pretty starkly between research and medical care.  But we try to make the information as accessible as possible for the participant to bring to their provider. 

Jeremy Veenstra-Vanderweele:
That's a useful principle.  Holly, I wonder if we could go to some of the questions from the chat box or the Q&A? 

David Blightman: 
Yes.  So, this is David Blightman speaking.  So, I've been looking at the chat box, and there's a question for Holly from Carol Ramos.  "Holly mentioned oversight of how All of Us data are used.  I'd be interested to hear more about that.  And is there in particular -- is there a particular framework they use for responsible data use?  How do they weigh different values/perspectives of researchers/participants, et cetera?"

Holly Garriock:
We do have a lot of oversight not only of our users in terms of the different credentialing that has to happen for them to gain access to our different tiers of data but also to their behaviors in the research platform.  So, we are able to see their workspaces and their peers are able to see their workspaces.  And we publicly post the work that they do.  So, it's a nice kind of auditing and self-auditing.  But we also have groups within the program that are responsible for that.  We also have the ability for the users of the data to be able to volunteer a review of their code and their workspace to see if it's potentially stigmatizing research, given our priority on underserved populations being represented in the data set. 

David Blightman:
Okay, another question that arose in the Q and A is from Tim Marianna, who asks, "How do we handle the risk of unintentional bias in digital tools?  For example, the machine learning algorithm may not by itself be biased but can provide biased output if trained on biased data."

Camille Nebeker:
I think that's just such a critical question because, you know, as most people know, the data that are used to train the machine learning process are creating the algorithms that are being deployed.  And if the data set is not representative of the people that stand to benefit from the algorithm, the AI, that will be deployed, we're really doing an injustice. 

So, I think about, you know, the use of the electronic medical record, and how it was designed for billing purposes.  Now for some cases, like radiology and ophthalmology, there are some really useful tools and uses of AI that make it a complement to the physician's role.  It is an assist. 

When it comes to mental health, there's so much more that we need to know before we deploy AI and do it successfully.  And in fact, one of the studies that I'm involved in was planning to use artificial intelligence to look at medication adherence, which in effect creates a weapon that could inadvertently discriminate against people who don't take their medications.  And instead of looking at the social determinants of why they don't take their medications -- it could be that they're working two jobs, having to get from one place to another, you know, just so many things in life that influence our behaviors.  It's critical that the data sets that are being used really reflect the people that we're building them for. 

David Blightman:
Does anybody else want to chime in on this question, or should I go to the next question? 

Luke Chang:
This is Luke speaking.  The bias question I think is really interesting, and also, it's a really difficult thing to figure out what the right thing to do is.  And I know it's certainly an active area in the machine learning community, how to deal with this.  Because on the one hand, we know from, you know, Paul Meehl and the actuarial-versus-clinical literature how we basically make judgments, and how many dimensions of features we can actually use as humans.  Even clinicians who have lots of expertise are still not using that many dimensions, maybe not more than like three simultaneously.  Whereas we can have algorithms that can use many dimensions and also do really complicated things in how we extract features from that data. 

So, on the one hand, if the goal is to predict something, then we might not want to really muck around and muddle with the features, because we don't really understand how everything interacts.  But on the other hand, we might want to downweight certain things.  Because in our cost function, while we might want to predict, we also might want to be minimizing bias.  And we don't necessarily have a good way to write down that cost function for how we're training the models themselves. 

And then further, I think another thing that makes it difficult is that multiple people throughout the day have brought up interpretability.  So, on the one hand, if we can see how a certain feature, like a demographic characteristic or something like that, is weighing into the model, then you can basically choose to ignore that.  But when it gets really complicated -- if you're in these deep learning models with all these non-linear interactions and millions of parameters -- it becomes really difficult to figure out how to interpret it and how that bias is creeping into the model. 
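
To make Luke's cost-function point concrete, here is a minimal sketch, assuming a plain logistic regression trained with NumPy on made-up toy data; it is illustrative only, not any panelist's actual pipeline.  The training objective adds a penalty on the gap in predicted scores between two groups, so predictive accuracy is explicitly traded off against a crude fairness term.

```python
# Sketch: logistic regression whose loss is cross-entropy plus a penalty on the
# difference in mean predicted score between two groups (a rough demographic-
# parity term). Toy data and settings; no intercept, for brevity.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_fair_logreg(X, y, group, lam=0.0, lr=0.1, n_iter=2000):
    """Minimize mean cross-entropy + lam * (mean score, group 0 - mean score, group 1)^2."""
    n, d = X.shape
    w = np.zeros(d)
    g0, g1 = (group == 0), (group == 1)
    for _ in range(n_iter):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / n                      # gradient of the cross-entropy term
        gap = p[g0].mean() - p[g1].mean()             # between-group score gap
        dgap = X[g0].T @ (p[g0] * (1 - p[g0])) / g0.sum() \
             - X[g1].T @ (p[g1] * (1 - p[g1])) / g1.sum()
        grad += 2.0 * lam * gap * dgap                # gradient of the fairness penalty
        w -= lr * grad
    return w

# Toy data in which one feature is correlated with group membership.
rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)
X = np.column_stack([rng.normal(size=n), rng.normal(size=n) + 0.8 * group])
y = (X[:, 0] + 0.5 * X[:, 1] + 0.5 * rng.normal(size=n) > 0).astype(float)

for lam in (0.0, 5.0):
    w = fit_fair_logreg(X, y, group, lam=lam)
    p = sigmoid(X @ w)
    acc = ((p > 0.5) == y).mean()
    gap = abs(p[group == 0].mean() - p[group == 1].mean())
    print(f"lambda={lam}: accuracy={acc:.3f}, between-group score gap={gap:.3f}")
```

Setting the penalty weight to zero recovers ordinary logistic regression; raising it shrinks the between-group score gap at some cost in accuracy, which is exactly the trade-off Luke describes not yet knowing how to write down for more complex models.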

Female Speaker:
Thank you, Luke.  I realize we are bumping up against time, and we have the luxury of another hour for a broader discussion.  Before handing things back to Dilip, I just want to say thank you to our panelists.  It was a stimulating discussion, and I especially appreciate folks acknowledging where there are questions that don't yet have answers. 

Dilip Jeste:
Thank you from me as well.  I think you've all done a great job, and thank you for keeping your comments on time.  This hour has been very productive.  So, back to Holly. 

Sarah Lisanby:
Okay.  Well, thank you, Dilip.  Thank you, Jeremy, for leading the discussion on this very important topic.  So, the next hour is our general discussion.  And over the course of the morning and early afternoon, we've heard from a broad range of researchers from a variety of disciplines -- psychiatry, psychology, genetics, computer science, data science, engineering, just to name a few.  We've heard from representatives from industry, including the electronic health record industry, technology and data companies, and pharma.  We've heard from health systems like the VA.  We've heard from SAMHSA.  We've heard from professional organizations like American Psychiatric Association, nonprofit organizations, and patient advocacy. 

And you've helped us in a short period of time get a broad overview of the landscape around our future vision of delivering effective and personalized mental healthcare that works and is equally accessible to all.  You've helped us examine and take a closer look at existing networks and approaches to appreciate their strengths and challenges.  And you've helped us map out the issues in getting from where we are today to that envisioned future, with a very nice discussion about how technology may not be the major challenge.  We also have to think about culture, incentives, ethics, and the human factors. 

To cue up our general discussion, I would like to go back to something that Ken Duckworth said, which I think is really worth underscoring.  He told us that the NAMI call lines are being overwhelmed with people calling with mental health concerns in the midst of the COVID-19 pandemic.  And he -- that's the -- that's our current reality. 

And he pointed out with that as a backdrop that patients are not well served by fragmentation in the mental health system.  He reminded us that the patients that we serve are disappointed by prior treatments that over-promised and under-delivered. 

So, with that in mind, I'd like to open this up to a general discussion.  And I'd like to ask each of us to focus on what do we need to put in place concretely to be able to build towards a future where multimodal data can inform treatment selection for mental health in a way that's accessible and effective for all.  And going back to the vision that Dr. Gordon presented to us in the morning, thinking about what role we can each play in achieving that future. 

The panelists are welcome to unmute themselves and participate.  You're also welcome to use the chat boxes.  And I've also asked our Q and A moderators from each of the sessions who had leftover questions to bring them forward so that we can have a fuller discussion.  So, I’d like to open it up there for discussion amongst the group. 

Greg Simon:
Yeah.  This is Greg Simon.  I've unmuted myself.  I think you can hear me. 

Sarah Lisanby:
Yes. 

Greg Simon:
You know, I'll get on one of my soapboxes that some of you may have heard me climb on before.  But, you know, to me it's really essential that we distinguish between, you know, privacy issues and proprietary issues.  And my experience is that those sometimes get blurred to serve the interests of, you know, particular stakeholders who, you know, basically would like -- and it tends to be more proprietary stakeholders. 

So, you know, one of the questions that came up earlier in the discussion is who owns the data?  And at least, you know, in my network, or in the world I operate in, we are very clear that health system members own the data.  You know, there's no question about that.  It's a legal principle, and it's also a foundational ethical principle, that health system members own the data.  They are their data.  We borrow those data temporarily to learn from them.  We are guests in that house.  And we have to behave as guests and always respect people's rights and try to communicate with them clearly about how, when, and where their data will be used.  So, that's pretty foundational. 

But then on top of that, when we talk about sort of the proprietary interests in what is learned from those data, I feel pretty strongly that that's public domain work.  And if our members of these large healthcare systems have said yes, you know, you may use the data that we contribute for research.  And when we've talked to people about this, they are pretty clear that they feel a strong obligation to help other people who might also suffer from mental health conditions.  But they have strong objections to that knowledge being privatized. 

So, I think, you know, we need a pretty clear distinction there.  And, you know, I think in terms of creating the incentives for, you know, privacy protection and public domain learning.  Or, you know, as I sometimes say, if your code and all your results aren't on GitHub you should say it never happened.  That's my soapbox.  I'll climb down now. 

Sarah Lisanby:
Thank you for raising that and the topic of balancing access and open source.  Not only with respect to the data but also the algorithms that are published from the data, balancing that with the privacy issues. 

And you raise another aspect, the proprietary nature.  We do have representatives from different sectors here, industry as well as academia.  And I'd love to hear some discussion, because I think depending on where you sit, it influences how you experience that divide.  Just in terms of accelerating science, we think that open data and open source algorithms -- using things like depositing your data in a national data archive, for example -- can be research accelerators.  But there may be incentives, or disincentives, let's say, for people participating in that.  Can we have any comments on this issue? 

Greg Simon:
Yeah, well, this is Greg.  If I could just add one comment, because I think there clearly are privacy issues related to data sharing, which our group is way into the weeds on and happy to talk more about.  But it gets very nerdy.  But there are no privacy issues related to sharing of code.  You know?  So, for us, patients' data are behind a pretty high wall, and we have to be very careful.  But every line of code we have ever written is on GitHub. 

Sarah Lisanby:
And may I ask the researchers here: do you agree with those comments?  Or is Greg the outlier here? 

Luke Chang:
I'm happy to fully support Greg's sharing of code on GitHub.  Or at least the code.  The data, I think, is really complicated.  I think -- on the one hand I think -- I really appreciate his perspective that he -- where the patients own the data or the network owns the data.  And you know, it's the privilege of everyone who has to work with it. 

But then on the other hand, there's a bunch of things -- when you start adding -- this is more from my end-user hat, not so much my data-generating hat.  When there are really large barriers to accessing the data, it makes it -- so, for example, even, you know, the NIH's data sharing.  It's great that they're putting all this effort into it.  But then on the other hand, in practice, I find it really hard -- it's very hard to put data in, and it's really hard to get data out of it. 

And other things.  Like I've waited a year sometimes to be able to get access to data sets.  And then I've been rejected and rejected, or you have to jump through so many hoops.  You have to get a legal team in.  And not everyone has the resources to be able to do that.  So, then it's not really democratizing or whatever, however you pronounce the word, access to the data and the types of questions and things they can use with it.  So, I feel mixed about it, but I fully appreciate what Greg's been doing. 

Sarah Lisanby:
So, I think you're making the point that it's not just about researchers being willing to share their data, it's about making it easier to access, lowering the barriers for investigators to be able to use that data.  Any thoughts on this topic, data sharing and code sharing? 

Greg Simon:
This is Greg.  I was going to add, you know, I am a big fan of what All of Us is doing in this space.  And I wanted to sort of go back and underline something that was talked about related to All of Us.  Which I think is going to be a really big issue for the research community to think of. 

You know, it's that specific sort of, you know, All of Us idea that you can have access to these data, but you need to operate in public.  Meaning, you know, Jupyter notebooks -- I'm getting nerdy about that -- or whatever system you're using to access those data, your code is available for public inspection.  So that people can make sure you're not doing bad things. 

What that means is that some of researchers' proprietary interests -- you know, I'm able to ask questions privately and keep the answers private to myself until I'm ready to release them -- have to go away.  That would certainly accelerate learning and would break down silos.  But it's not -- it has not been part of our culture to say that everyone in the world can see the results of my analyses as soon as I can see them.  That would be a radical change in how we do business. 

Sarah Lisanby:
And that's another aspect or element of trust when you're talking about do people who provide the data trust the systems to keep it private?  There's also another trust issue of science.  Do we trust the researchers or the scientists to be transparent about how they're using data, how they're generating those results?  These are fundamental issues that are key to reproducibility and rigor in science.  It's something that we definitely care a lot about at NIH for us to trust [inaudible]. 

And let me just remind you as you want to make comments, please unmute yourselves.  And we do have Adam Thomas who is monitoring the chat box and Q and A box, if you want to make comments there. 

Jeffery Girard:
Yeah, if I could just -- this is Jeff Girard here.  If I could weigh in on Greg's point about the sharing of code.  I think, you know, as a general principle, you know, I fully endorse that.  But I do think that there's nuance here of course, which he alluded to.  I think there are different reasons to share code, different purposes for doing so. 

So, there's probably a continuum between, you know, perhaps sharing your code in order to, you know, yeah, just show, "Hey, these are the analyses I did.  You can look and check for typos or bugs." And it's much more about that reproducibility, transparency piece. 

Hopefully there are very few arguments against that.  It's hard to imagine what the downside of that would be.  But there's also -- on this continuum there are also the purposes of sharing for, let's say, developing software for other people to use, right?  So, let's say we want to implement this model that's going to, like in the type of work that I do, detect facial expressions.  Now, other people are going to download the software, use it on their data, and I think that that has the potential to be an incredible accelerant to work. 

But it also -- I think with that great power comes great responsibility to make sure that if you're going to make something easy, that we're accelerating us on the right path.  That we're not going to charge down on something that's poorly validated or not working that is now going to sort of balloon into this whole area that's not very trustworthy.  So, I do think that, yeah, like the more you expect people to reuse your code, the more effort you need to put into validating it and, you know, commenting it and documenting it, things like that.  So, that's an important, I think, note to add. 

Sarah Lisanby:
Yeah, thank you for sharing that.  And I know that we have computer scientists and more professional software developers who are participating.  And I think it might be worth discussing what sort of collaborations are needed across disciplines -- from the clinical research sphere collaborating with the computer scientists or software developers or the engineering side -- to be sure that those software tools are well-documented, able to be used by others, don't break when the operating system changes, and are supported.  And it's a big deal to support software.  I mean, typically companies support software. 

But there are open source examples of -- like Linux or things that are -- where the package is open source.  Can I hear some discussion across these disciplines about what sort of collaborations are needed? 

Male Speaker:
[inaudible]

Jeffery Girard:
Sorry, I'll just add really quickly that, before we go on, I think the collaboration piece is an excellent one.  I think another thing is funding.  That a lot of those open source software packages are really being supported with people working nights and weekends.  Like blood, sweat, and tears put into these things.  And I just think there's not enough funding for that type of really critical work.  So, but also the collaboration piece. 

Holly Garriock:
Was that, Guillermo?  Someone else wanted to speak. 

Guillermo Sapiro:
Yes, it is, thanks.  Thanks, Holly.  I just wanted to make a quick comment about sharing software.  And, as was said, there are multiple reasons.  One of the ways to share software and data without sharing software or data is through cloud computing, something that we are exploring, and Duke and other universities are connecting their health systems with the cloud -- PHI, HIPAA, everything. 

So, that will allow you, at any institution, to run my software without me having to share the software with you.  And it will also allow me to test on your data without you having to share the data explicitly with me.  So, I just wanted to note, from the technology perspective, that a lot of the concerns that we are having are going to be improved with the use of cloud computing. 

So, security, transparency, who touched the data, who touched the code?  That's actually the meeting I'm jumping to in the next 15 minutes.  So, I just wanted the non-technology people to be aware that there is technology, not to address everything, but, yes, to address some of these sharing issues -- both of software and of data. 

And, as I said, there are multiple institutions that are leading and paving the way for that.  And I would be happy, if somebody wants to know more, for you to contact me offline.  But it's not just the old-fashioned model where I have to send you my code and you have to install it.  Many institutions and many companies are working very hard on a standardization of this software for healthcare through cloud computing. 
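
A toy, plain-Python illustration of the "bring the code to the data" pattern Guillermo describes: the analyst submits a function, the data holder executes it inside its own environment, and only aggregate results come back.  Real deployments rely on managed cloud enclaves, containers, access logging, and HIPAA controls, none of which is modeled here; the records and the suppression threshold below are invented for illustration.

```python
# Sketch: the data holder runs the analyst's code and releases only aggregates.
def data_holder_execute(analysis_fn, min_cell_size=10):
    """Run analysis_fn against protected records; release only small-cell-safe aggregates."""
    protected_records = [  # stays inside the data holder's environment
        {"age": 34, "phq9": 14}, {"age": 51, "phq9": 6}, {"age": 29, "phq9": 21},
        {"age": 62, "phq9": 9}, {"age": 45, "phq9": 11}, {"age": 38, "phq9": 17},
        {"age": 57, "phq9": 4}, {"age": 23, "phq9": 19}, {"age": 41, "phq9": 8},
        {"age": 66, "phq9": 12},
    ]
    result = analysis_fn(protected_records)
    if result["n"] < min_cell_size:        # crude small-cell suppression rule
        raise ValueError("Result suppressed: cell size below threshold")
    return result

# The analyst's code travels to the data; row-level data never travel back.
def mean_phq9(records):
    scores = [r["phq9"] for r in records]
    return {"n": len(scores), "mean_phq9": sum(scores) / len(scores)}

print(data_holder_execute(mean_phq9))
```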

Sarah Lisanby:
Thank you, Guillermo.  Those are very important points.  And this might be an example where there are already existing technological solutions, and the problem is not the technology, per se, but the incentives that make the individual stakeholders unwilling to use those solutions because of wanting to keep things proprietary, either for research reasons or for their own financial interests. 

So, I'm wondering if -- we've been talking about data sharing, and I know we do still have our colleagues from Epic on the line.  When we think about the clinical data and [inaudible] regarding privacy of clinical data within our electronic health record systems.  And of course, we have multiple electronic health record systems.  So, it's a bit of a patchwork of different systems that are in use across the country and across the world. 

I'm wondering -- I'd love it if our Epic colleagues, if they are still on the line, might want to address how they view this, in terms of how we might share between electronic health record systems?  Or our colleagues from the NHRN or other [inaudible] networks that have addressed interoperability between and across EHR systems. 

Matt Switzler:
Yeah, so, I'll be the Epic guy on the line.  But, yeah, I think there are some well-established standards for exchanging information between health systems.  Those kind of fell out of the meaningful use initiatives.  One barrier I see to doing that is that a lot of behavioral health providers were actually exempt from those meaningful use and EHR incentive programs.  So, in our experience there still is a lot of faxing and phone calls and mailing to work with some of those organizations.  So, I feel like maybe at a national level getting some incentives there might be useful. 

Another sort of thing that I'm thinking through, that we need to be thoughtful of, is that interfaces are great.  But I think we need to be thoughtful not to have, you know, 13 or 14 different standards that folks are programming to, because then all of a sudden your EHR vendors are going to be just programming interfaces all day.  So, I think one area where we could use help from an industry perspective is getting sponsorship of interface standards and holding different software vendors to those standards, which is kind of a tricky thing to do, especially as this is a rapidly evolving field.  But we will need interfaces so that we don't have to reinvent the wheel if organizations want to plug and play with data.  So, those are kind of my thoughts on that one. 

Sarah Lisanby:
So, thank you for that.  So, you're pointing out that meaningful use was meant to facilitate the sharing of health data as people move through different health systems, but that behavioral health was largely exempt from it.  And you commented that there might need to be some national incentives to overcome that, and sponsorship of interface standards, holding software vendors to those standards.  Can you say more -- for those of us who are not in the software and computer science space -- about what you meant by sponsorship of interface standards with software vendors?  What would that look like?  And in whose domain is that? 

Matt Switzler:
I think it falls somewhere in the software space in terms of who defines the standard.  You can imagine two computer systems talking to each other, and they need to be speaking the same language to have a meaningful interaction.  However, if you're expecting one system to speak a hundred different languages, that's hard to implement and support from a technical perspective.  Whereas if everyone is speaking the same language, the data flow much more smoothly.  So, that's kind of what I'm referring to when we talk about interface standards. 

Part of that is a common sort of dictionary of data elements.  For traditional medical information, like medications or diagnoses, we have things like RxNorm and ICD-10, and those provide standardized code sets.  But for a lot of the sets of information that behavioral healthcare providers care about, things like questionnaires or other social factors, those data dictionaries don't necessarily exist yet.  So, I think one thing that we could get help with from groups like NIMH would be having those standard data dictionaries.  Because then everyone programs to the same standard. 
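
As a concrete picture of what a shared data element could look like, here is a sketch of a questionnaire score expressed as a simplified FHIR-style Observation, written as a Python dictionary.  The structure is pared down, the patient reference is hypothetical, and the LOINC code shown (commonly cited for the PHQ-9 total score) should be verified against the current LOINC release before anyone relies on it.

```python
# Sketch: a PHQ-9 total score as a simplified FHIR-style Observation.
phq9_observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "44261-6",           # often cited for PHQ-9 total score; verify before use
            "display": "PHQ-9 total score",
        }]
    },
    "subject": {"reference": "Patient/example-123"},   # hypothetical identifier
    "effectiveDateTime": "2020-04-01",
    "valueQuantity": {"value": 14, "unit": "{score}"},
}

# Two systems that both emit this shape can be pooled without a custom interface
# for every site pair; the point is the shared dictionary, not this exact payload.
print(phq9_observation["code"]["coding"][0]["display"],
      phq9_observation["valueQuantity"]["value"])
```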

Sarah Lisanby:
Great, thank you for that.  And I'd love to hear from -- a lot of our investigators who spoke earlier commented on the strategies they've taken to try to interface with the EHR and use it for improving mental health outcomes.  So, I'd love to hear some discussion between these groups about where the potentials are. 

Diana Clark:
Hi, this is Diana Clark from the APA. 

Sarah Lisanby:
Thank you, go ahead. 

Diana Clarke:
I think one of the things that we have actually -- and it's something we've actually sat down and talked to NIMH about as well -- is this idea of identifying common data elements that can be used across the different health systems and the different EHRs.  And I mean that is going to be extremely important for us to have comparable data across the different systems. 

And another way to do this, too, in addition to identifying common data elements, is that even if different EHRs and different systems are using, say for example, different patient reported outcome measures for the same outcome -- for example, depression -- if you can figure out ways to create a crosswalk between those, then one system isn't, you know, limited by the tools another system chose. 

So, even if one system is using, for example, the PHQ-9 and another system is using the GDS -- because we know that for geriatric patients the GDS tends to work better for them as opposed to the PHQ-9 -- if we can figure out a way to make that crosswalk, that would be really important. 
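
To show how such a crosswalk would be applied once it exists, here is a minimal sketch.  The mapping values are placeholders only; a real PHQ-9/GDS crosswalk would have to come from a published psychometric linking study (for example, equipercentile equating), not from this code.

```python
# Sketch: apply a (placeholder) crosswalk so depression scores from different
# instruments can be pooled on one scale. Values are illustrative, not validated.
ILLUSTRATIVE_GDS15_TO_PHQ9 = {
    # GDS-15 total -> approximate PHQ-9 total (placeholder numbers)
    0: 0, 1: 1, 2: 3, 3: 4, 4: 6, 5: 7, 6: 9, 7: 10,
    8: 12, 9: 13, 10: 15, 11: 17, 12: 19, 13: 21, 14: 23, 15: 25,
}

def harmonize_depression_score(instrument: str, raw_score: int) -> int:
    """Return a PHQ-9-equivalent total so records from different systems line up."""
    if instrument == "PHQ-9":
        return raw_score
    if instrument == "GDS-15":
        return ILLUSTRATIVE_GDS15_TO_PHQ9[raw_score]
    raise ValueError(f"No crosswalk available for {instrument}")

# One site records a PHQ-9 of 14, another a GDS-15 of 9; both land on one scale.
print(harmonize_depression_score("PHQ-9", 14), harmonize_depression_score("GDS-15", 9))
```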

And that's something that I know we at APA are trying to work towards.  For example, working with a team from HRQ, H-R-Q, to actually try to see how we can harmonize depressive measures across different registries -- ours as well as the PRIME registry -- and then across different health systems. 

So, it's something we're taking steps towards.  But, definitely, I agree with the Epic person that we have to have common data elements and defined common standards in order for us to have data that can be comparable. 

Sarah Lisanby:
Great, thank you, those are very important comments.  So, it's not just the software interface, but it's also the measures themselves, and having crosswalks across the measures so that we can look at the same thing -- whether it's a measurement of an outcome or a clinical feature -- across systems.  Because the earlier sessions nicely mapped out some of the challenges that researchers face.  When you're trying to do research in the real world, you're having to make use of what is actually there.  And it may be a different type of thing than you would have measured in your conventional clinical trial type of setting. 

Paresh Patel:
Holly, if I can raise a point, and this might be a good point to intervene on that: as someone mentioned earlier, there's an engineering problem, but there's also a sort of social-cultural problem.  And I think this is something that maybe Matt from Epic can speak to, if you have enough line of sight to it.  But with something like Cosmos, which is, you know, Epic's attempt at a very large sort of common data model structure, there are challenges there for using it for research. 

And I don't know if, Matt, if you have any line of sight?  What are the social cultural challenges to using that data for research?  Because clearly that data can be used for clinical purposes, but the rules of the road limit how it can be used.  And maybe, you know, just speak to some of the barriers to using that kind of data for research. 

Matt Switzler:
That might've been my phone, sorry, everybody.  Paresh, I probably don't have all of the insight into the Cosmos data set and some of the challenges there.  I do think there is an opportunity there for either usage of the Cosmos data set or a similar model.  Kind of the idea of Cosmos is that we use the same interface standards that EHRs used to talk to each other to pull into a common medical data set that Epic organizations have access to.  And the benefit there is that we don't -- we sort of have that common language and that common use of that data for organizations to use.  And we can do some of this more advanced research without having to worry about sort of the data translation components of it. 

So, that's something we're working on.  I'm not sure, Paresh, if you had sort of more targeted questions for me on that one?  I know I'm not the Cosmos expert here, but. 

Paresh Patel:
Sorry, I hope I'm not causing the feedback. 

Adam Thomas:
Sorry, I think that is coming off your line, Paresh.  This is Adam.  I'm hosting.  I just hit mute-all for everybody.  Try again, Paresh. 

Paresh Patel:
Okay, now, okay? 

Adam Thomas:
It's okay. 

Paresh Patel:
I was just speaking to, you know, early on -- and this is -- I think this is why I didn't want this discussion to be Epic-centric.  Early on, when we were looking at things like Cosmos, which are really large federated data sets, the challenge was institutional resistance to sharing data for research.  And I don't know how that's evolved over time, but it speaks to how hard it is to get that data together across multiple centers, and that maybe there are opportunities there for developing a common framework for sharing that data. 

Sarah Lisanby:
You know, one thing I wanted to bring up is what is in the data set to begin with, and what is actually in the EHR?  When we think about mental health treatment, it's not just about medications.  It's also about psychotherapy and behavioral interventions.  And earlier in the day, we heard comment that we might know that a psychotherapy session occurred or was paid for, but we might not have in the EHR information about what type of therapy or the quality of the therapy.  And I'm wondering if any of our colleagues on the line who may have a background in this have any comments about what we -- should we be capturing something more about psychotherapy and cognitive behavioral interventions? 

Greg Simon:
Yeah, this is Greg Simon.  I was going to say I think this is a really critical issue, and it's a real challenge.  You know, and I think we probably need to consider it at multiple levels.  One of those is how can we at least get providers to record, in a more consistent way, specific sort of families or schools of therapy or therapeutic interventions -- to at least report those in some standard way.  And work with the EHR vendors, especially specialty mental health EHR vendors, about, you know, trying to get people to record that more systematically.  Eventually, it may be payers who say, "You really have to record those data, you know, for payment."  That may drive it. 

Another one that is very relevant this month is, you know, in many of our healthcare systems mental health care is switching to virtual care.  And, you know, many mental health providers are now using various technology platforms to interact via video conference.  That -- and I suspect some of that will stick after we get past this crisis.  There's an interesting question there because what it means is that now the actual content -- the content, the lexical content, that tone of voice content, and even all the video content of psychotherapy -- is now passing through someone's pipe when it used to occur completely in private. 

There is an interesting, fantastic, and also somewhat frightening possibility there of how we would mine those data.  And you know, I have colleagues who are very interested in those questions about how we might be able to mine the lexical content, the tone of voice content, and facial expression content from video psychotherapy sessions to learn about what actually happened.  It could be a relatively dramatic advance in understanding how and why psychotherapy works, but there are a lot of issues to overcome about privacy.  So, that's a -- but I think we have moved so far to virtual care in the last month that we will probably pull back from that somewhat, but I think it's going to be a much bigger part of our future. 
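
As a toy sketch of just the lexical piece Greg mentions -- turning a consented, de-identified session transcript into analyzable features -- the snippet below counts a few word categories.  Real work would use validated NLP pipelines and careful governance; the word lists here are invented for illustration, not clinical lexicons.

```python
# Sketch: trivial lexical features from a therapy-session transcript string.
import re
from collections import Counter

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}                      # illustrative list
NEGATIVE_AFFECT = {"sad", "hopeless", "worthless", "anxious", "afraid"} # illustrative list

def lexical_features(transcript: str) -> dict:
    tokens = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return {
        "word_count": len(tokens),
        "first_person_rate": sum(counts[w] for w in FIRST_PERSON) / total,
        "negative_affect_rate": sum(counts[w] for w in NEGATIVE_AFFECT) / total,
    }

print(lexical_features("I just feel so hopeless lately, and I'm anxious about my job."))
```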

Sarah Lisanby:
Great, thanks for making that point.  And it was referred to earlier in our session as the three V's of telemedicine, which I had not heard it referred to that way before: the verbal, the vocal, and the visual.  And as we are doing this conference right now, a virtual conference, we are communicating in those three ways.  And the degree to which that sort of information can be used to improve health outcomes is certainly an exciting topic to explore, raising not only ethical and privacy issues but also technical issues of how you handle that data and what it could mean.  Any thoughts on that?  I know we've got some real deep digital phenotyping experts who look at these devices and other types of approaches. 

Male Speaker:
I had to mute all again.  Please go, thank you. 

Diana Clarke:
Okay, this is Diana from the APA.  So, one of the things around the idea of online psychotherapy is, I think, we could probably learn from the Australians on that.  And, you know, just kind of reach out to see what are some of the results they've had.  Because since 2012 we do know that Gavin Andrews and his group in Australia have been doing online psychotherapy and with video conferences.  So, we could potentially learn from them as opposed to, you know, starting from scratch.  So, that's one thing. 

And then, the next piece I was going to actually -- oh, my goodness.  It's slipped from my brain.  I'll remember, and I'll come back to it.  It's gone.  Sorry, I guess I need another cup of coffee.  The second thing I was going to say is completely gone.  But we can actually learn from some of our colleagues in other parts of the world. 

Sarah Lisanby:
Yeah, so, thank you for that point in terms of the telemedicine, tele-psychiatry, online psychotherapy going on.  And I think the issue that was being raised was how to capture that data.  How useful could it be?  And how, if it's being captured, it might change the therapeutic interaction?  Any discussion on that? 

Diana Clarke:
Sorry, I'm going to go back to the issue that I wanted to really talk about before I jumped in and mentioned that.  I agree with Greg that we do need a standardized way for the clinician, you know, for the clinician to document things.  But I really do believe, and this is something that we hear from clinicians all the time, that we have to really make sure we balance the clinicians clicking off a box and selecting something versus having the ability to write their notes, you know, free form notes.  So, we have to balance those two things. 

So, I wanted to say that first because that's really important.  And I know that's where you do find clinicians opting out of certain things.  Because they find that they're just having to click all these boxes, and it's supposedly interfering with their ability to do the clinical work that they're supposed to do.  So, we need to balance those two things. 

Sarah Lisanby:
I'm really glad you pointed that out.  We've talked some about patient burden, but what you're bringing up now is provider burden.  And when we consider the pressures on providers these days, as many of you will know, if it doesn't help the provider be more efficient and take better care of the patient it's probably not going to happen because of the time pressures.  So, delivering value to the clinician is clearly very important and being able to do that in a way that does not add to the burden of documentation. 

Peter Zandi:
Hello, this is Peter.  I'll just speak to that a little bit, because we were doing that with the -- am I on mute?  No, sorry.  We're doing that with the ECT clinics in the NNDC, where we're trying to encourage harmonized documentation of the ECT procedure.  And in doing that, we are trying to focus on two goals.  One is being clinically useful to the providers and to the patients and to the actual clinical encounter.  And at the same time, collecting structured and harmonized data from that ECT procedure that will be useful for the sort of research and quality improvement activities down the line. 

And I think that same sort of thinking can be applied to psychotherapy or other types of procedures.  And I think it is a worthwhile thing to think about.  So, it isn't what we usually see when we're dealing with EMR data, where it's garbage in, garbage out.  Instead we're getting good data that's not only good for clinical purposes but also for research purposes. 

I will say it's a lot of work.  I spent a lot of time with the clinicians trying to get them to think through what would be useful in documenting these procedures for them clinically but also can be captured in a way that will be useful for research purposes.  It's just a lot [inaudible]. 

Sarah Lisanby:
Thank you for bringing that up here.  And when we think about how we can use these streams of data, including the EHR, to inform treatment selection, we're discussing now: do we have adequate information in the electronic health record to be able to inform that treatment decision?  In the case of medications, it is common that you have the medication name and the dosage in milligrams.  But in the case of procedures, like ECT or TMS or psychotherapy, you might have an indication that the procedure was done, but you might not have the dosage or enough information about that procedure to be able to have the system learn how best, and to whom, to deliver it.  Because it's like writing down, "I prescribed Prozac," but not bothering to mention what dosage you prescribed.  And that's just not enough information to inform that decision. 

So, in thinking about what things we need to put in place concretely to build towards a future where multimodal data can inform treatment selection for mental health -- this is one of the concrete things you're illustrating, Peter.  Building into the EHR sufficient resolution and documentation of the procedures that we're actually doing, and your ECT example is one very good example of having done that.  May I ask: did you build that into Epic, and is that something that can be shared?  And do you see this becoming a standard for how to document the ECT procedure? 

Peter Zandi:
That's the hope actually, yes.  We're working with all our clinicians across the NNDC to come up with recommendations about what would be optimal for clinical purposes and hopefully for research down the line about these procedures.  And we hope to publish a paper to report on those findings and also to create tools -- particularly in Epic, whether it's a flow sheet or a smart form -- that we can provide to other clinicians so that they can easily adopt these flow sheets as a tool, a document, for ECT in their clinics.  So, the hope is that we can advocate for this becoming a standard way of documenting these procedures and spreading it.  And not just within the NNDC, but beyond as well to move towards a more standardized approach of capturing the structured information. 

And while we're doing this, I'll just go back to the point that Diana made, that I do think it is important to do this in such a way that doesn't burden the provider.  And the provider still wants to document a lot of information in their notes.  So, we're trying to balance good structured information with the flexibility to add details in their notes.  And, again, yes, we're trying to disseminate this in a way that would become more standard for other clinics to use as well. 
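
A hypothetical sketch of what pairing structured procedure "dose" fields with a free-text note could look like in code, in the spirit of the harmonized ECT documentation Peter describes.  The field names and example values are illustrative assumptions, not the NNDC's actual flow sheet or smart form.

```python
# Sketch: structured ECT "dose" fields alongside a preserved free-text note.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EctTreatmentRecord:
    patient_id: str
    treatment_date: str
    electrode_placement: str            # e.g., "right unilateral", "bitemporal"
    pulse_width_ms: float
    charge_mC: float
    seizure_duration_sec: Optional[float] = None
    adverse_events: list = field(default_factory=list)
    clinician_note: str = ""            # free text kept next to the structured fields

record = EctTreatmentRecord(
    patient_id="example-001",
    treatment_date="2020-04-01",
    electrode_placement="right unilateral",
    pulse_width_ms=0.3,
    charge_mC=151.2,
    seizure_duration_sec=42.0,
    clinician_note="Tolerated well; mild post-ictal confusion resolving within 20 minutes.",
)
print(record.electrode_placement, record.charge_mC)
```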

Sarah Lisanby:
Thank you.  So, we're thinking about what concretely needs to be put in place to accomplish this goal.  We've talked about a number of different factors.  One being the literal software standards for interfaces across EHR systems.  We've talked about the need for data dictionaries and common data elements, and the need to limit the provider burden so that the information will actually be collected and the approach is feasible.  We've talked about open source sharing of the data and the algorithms, and removing barriers to accessing that data and those algorithms.  And right now, we're talking about building in the dosage information, so that we have enough information regarding the interventions that are given to be able to link those to response. 

Are there -- what else do we need to put in place?  What needs to be built concretely to put in the framework where an AI ecosystem could be envisioned in the next five, 10, 20 years that would become part of a learning health system for mental health specifically? 

Peter Zandi:
I'll just -- this is Peter again.  Just two comments to build off of what was already brought up by previous folks.  One is that I'd endorse the idea from Diana: work to establish how to do a crosswalk between different patient reported outcome measures.  That's a really interesting idea.  Because we do find different centers like to use different measures, and it would be good to allow them to continue to do what they like but allow us to bring that data together and use it in a larger data set, even though they do have different measures.  So, I think that's something that could be encouraged: work that tries to evaluate and establish psychometric crosswalks between these different measures. 

And then a second point that I think would be useful in terms of concrete steps.  I'm not quite sure how it works.  This builds on the point that Matt made.  A lot of these patient reported outcome measures, as he mentioned, are captured in Epic.  But there's not a standard coding system for them.  So, they live in these flow sheets or smart forms, and each institution captures them in a different way. 

So, even within systems that use Epic, trying to bring that data together on the PHQ-9 measures might be a little bit challenging; not impossible, but it's a lot of leg work.  But it would be made a lot easier if there were some standard nomenclature for pulling that data.  And I actually don't know what would be the way to drive that to happen.  How do we incentivize adoption of new coding systems -- like a LOINC -- for these other sorts of things that we don't have codes for?  [Inaudible] curious, is there a way to encourage or incentivize that?

Greg Simon:
Yeah, this is Greg.  I was just coming in on that point about standard nomenclatures for patient-reported outcomes.  It is an issue, and there is nothing equivalent to LOINC.  In our network, we have sort of adopted a standard nomenclature for representing PRO data that I think is relatively robust, because it allows you to capture, you know, the actual wording of questionnaires, mode of administration, things like that.

It's public domain.  You know, it's a public-domain data model.  We put it out there for anyone to use.  And we have also built a sort of crawler tool that will troll through the backend of anyone's Epic database, since our systems use Epic, and find all the instances of common mental health patient-reported outcomes by searching the text of labels, questionnaire items, and so on.  All of that stuff, you know, is free to share.  I mean, we are not a standard-setting organization, so it's not like we say other people should use it.  But we say, "We've got some stuff that we think works pretty well for us."  And you can have it for free; just contact me.
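
As a hypothetical sketch of the two ideas Greg describes -- a long-format data model for patient-reported outcomes that preserves item wording and mode of administration, and a crawler that searches free-text flowsheet labels for known instruments -- the snippet below shows the general shape.  It is not the network's actual schema or tool; the fields and search patterns are assumptions for illustration.

```python
# Sketch: (1) a long-format PRO record and (2) a label "crawler" over raw flowsheet names.
import re
from dataclasses import dataclass

@dataclass
class ProResponse:
    patient_id: str
    instrument: str        # e.g., "PHQ-9"
    item_label: str        # the wording the site actually displayed
    response_value: int
    mode: str              # e.g., "in-clinic tablet", "patient portal"
    recorded_date: str

# Regular expressions standing in for the search over an EHR backend's labels.
INSTRUMENT_PATTERNS = {
    "PHQ-9": re.compile(r"\bphq[\s-]?9\b", re.IGNORECASE),
    "GAD-7": re.compile(r"\bgad[\s-]?7\b", re.IGNORECASE),
}

def find_pro_flowsheets(flowsheet_labels):
    """Yield (label, matched instrument) pairs from a list of raw flowsheet labels."""
    for label in flowsheet_labels:
        for instrument, pattern in INSTRUMENT_PATTERNS.items():
            if pattern.search(label):
                yield label, instrument

example = ProResponse("example-001", "PHQ-9",
                      "Little interest or pleasure in doing things", 2,
                      "patient portal", "2020-04-01")
labels = ["PHQ 9 Total Score", "Ambulation distance", "GAD-7 item 3", "phq-9 q9"]
print(example.instrument, list(find_pro_flowsheets(labels)))
```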

Peter Zandi:
That's great, Greg.  Actually, I didn't know about that, and I will be contacting you about that.

Greg Simon:
Okay.

Sarah Lisanby:
So, I'd like to do a quick time check.  We have about 15 minutes left, and I'm really thrilled to see that we still have almost 100 participants online.  You really stuck with us, and I believe that's a testament to the level of interest that there is in the community about these issues.  I want to be sure that we cast a wide net here.  I know that, Adam, you've been taking the questions from the chat boxes.  I want to let you bring forward any questions that haven't been aired yet.  And also, other participants that we haven't heard from yet -- if you've been waiting for a break in the conversation to be able to make your points, please feel welcome to do so now.

Adam Thomas:
Thanks, Holly.  There haven't been a lot of chat questions recently, but I would like to take the opportunity to pull in one from earlier this morning that came from Javier Gonzalez-Castillo [spelled phonetically].  And I might add my own spin.  So, he noted that there was some discussion of wearable technologies, and he was wondering about priorities -- whether people could elaborate on what they see as the most important, highest-priority data to collect.  But I would like to expand that a little bit, to suggest: what other priorities could we add?  So, that would include genomics; that would include other biomarkers.  What do the folks on the clinical frontlines think the larger categories of data are, that we're missing, that could really increase our prediction quality? 

Sarah Lisanby:
Please unmute yourself if you'd like to respond.

Peter Zandi:
This is Peter again.  Sorry, I'll just jump in with a real quick comment here, just real quickly, that it's not so much a specific category of data, but what's interesting: there are a lot of these general sensor-type data that we've been talking about, digital phenotype data, whether it's mobile or wearable technology.  What I think -- but others can correct me if I'm wrong -- where there's a dearth is connecting that digital phenotype data with the electronic medical record data, to bring these two strands together to see how they correlate with one another and inform each other.  So, I do think that should be an area of active investigation: how to bring those two data sources together, to get at that multimodal picture that we've all been talking about.  I do think we're lacking there.
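
A toy sketch of the kind of linkage Peter is pointing to: joining daily wearable summaries to EHR-recorded outcome scores for the same patient so the two streams can be analyzed together.  All identifiers and values are made up, and real linkage would involve consent, governance, and far messier date alignment than an exact-date match.

```python
# Sketch: join daily sensor summaries to EHR outcome rows by (patient, date).
wearable_days = [
    {"patient_id": "p1", "date": "2020-03-01", "step_count": 3200, "sleep_hours": 5.5},
    {"patient_id": "p1", "date": "2020-03-02", "step_count": 4100, "sleep_hours": 6.0},
    {"patient_id": "p2", "date": "2020-03-01", "step_count": 9800, "sleep_hours": 7.5},
]
ehr_scores = [
    {"patient_id": "p1", "date": "2020-03-02", "phq9": 16},
    {"patient_id": "p2", "date": "2020-03-01", "phq9": 5},
]

# Index the sensor stream, then attach the EHR outcome rows that line up with it.
by_key = {(d["patient_id"], d["date"]): d for d in wearable_days}
linked = []
for row in ehr_scores:
    sensors = by_key.get((row["patient_id"], row["date"]))
    if sensors:
        linked.append({**row,
                       "step_count": sensors["step_count"],
                       "sleep_hours": sensors["sleep_hours"]})

for r in linked:
    print(r)
```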

Adam Thomas:
There are -- I've noticed there's been, with the exception of maybe one or two speakers, relatively little discussion of genomics.  And given that the model that Dr. Gordon has suggested was to follow oncology, I'm wondering if people think that should be a priority.

Greg Simon:
This is Greg.  Can't help but make a snarky comment, which is, we have to have some phenotypes that are at all useful before we can go looking for genotypes.  And I worry that, in our mental health space, we're still more, like, looking for a gene for shirt color than for eye color.

Roy Perlis:
It's -- so, it's Roy here.  So, first, I can't help but point out that the first depression genes actually came from using really crappy big-data phenotypes, in part from health records and in part from 23andMe.  So, as much as we decry our psychiatric phenotypes, that did lead us to find the first genes.  I also can't resist pointing out that the FDA lists more than 100 drugs on its own site, including probably 30-plus psychotropics, where genes are in the label. 

I think -- I completely agree with Greg that, in most cases -- you know, I like to give polygenic approaches their due, but Greg is right that there is precious little to go on with genomics beyond the [inaudible] very un-sexy P450 stuff that we've had for many years.  So, as a field, we have P450, we know it affects drug levels, we know at the extremes drug levels affect response.  So, I actually think, you know, in general, it's a combination of -- no one wants to talk about P450 because it's old news, and no one wants to acknowledge that polygenic scores for disease or phenotypes just haven't really worked very well in treatment response so far.  So -- and I'm agreeing with Greg, but so far, you know, the one counterexample is the one that no one really wants to do anything with because it's not sexy.

Male Speaker:
Right.  That's useful.

Dylan Nielson:
All right, can I also -- this is Dylan Nielson.  Can I just jump in and comment that, if the -- if the focus is purely on treatment response, as somebody said, the phenotypes get much, much clearer.  If you're only concerned with whether or not the patient subjectively feels better after they've taken a medication or had a treatment, it's a clearer question in some ways than what particular diagnosis they may or may not have.

Male Speaker:
[unintelligible]

Sarah Lisanby:
Dylan, I want to thank you for making that comment, and also encourage our other NIMH and NIH colleagues who are on the line to feel free to participate in this discussion, because we've really brought together a lot of experts from different areas within NIH as well.  And so, your comment brings us to the focus of our discussion here, which is how we can use data to inform treatment selection.  And that focus is very specific. 

Now, if getting a better diagnosis informs treatment selection, then that's part of it, but our main focus, where the rubber hits the road, what we wanted to understand better, is helping clinicians select the right treatment for the right patient at the right time and really realize this vision of personalized medicine for psychiatry and for mental health.  And so, as Dylan points out, if you have measurements showing that the person improved from before to after, that is a pretty clear phenotype, although I will say that you've got to decide what measure to use, reliably measure it, and make that available.  But -- any comments on this focus on treatment, as opposed to diagnosis?

Peter Zandi:
So, Holly, one quick counterpoint to the idea that the treatment outcome is easier to define.  In some sense, it is; in some sense, it's not so easy.  You were just alluding to it.  Over one timeframe, somebody could have sort of a quick response, but are you looking over one year or two years?  Some people have partial responses.  And so, it's not so easy, actually, in these pharmacogenetic studies and these treatment prediction studies, to define the outcome in a clear and clean way.  There are challenges to it.  I do think it's easier than maybe diagnosis, but it still comes with challenges about how to define a good outcome.

Sarah Lisanby:
Thank you for raising that, and I know we have developmental experts on this meeting.  And when we think about defining outcomes, it matters when, and at what stage of life, because there may be interventions done early that have different impacts later on in development.  And so, I want to remind ourselves not just to be thinking about treating diagnoses or clinical conditions once they occur, but should we be thinking about interventions in a pre-symptomatic period that might be preventative?  Any thoughts about that?

Neal Ryan:
This is Neal.  Obviously, you raise an issue dear to my heart, except usually the interval you're talking about is beyond the five-year interval for a study.  So, they're a little bit hard to plan, [unintelligible], I think.

Sarah Lisanby:
Well, just to remind us, in Dr. Gordon's introductory remarks, he did paint a wide vista, in terms of what timeframe one might envision doing projects like this.  And the timeframe, as I recall, he painted was -- could be beyond five years; could be, you know, 10 or 20 years; could be longer.  We want to hear from you.  What would it take?  What -- where should we be focusing?  What are your thoughts about what would be the most impactful way to use data to inform care for mental health, and to transform mental healthcare?  I encourage you to think really broad.  Don't necessarily put yourself in a five-year box.  Free yourself up to think broadly.

Neal Ryan:
That was the nicest correction I've experienced in a long time, so this is good. 

Sarah Lisanby:
[laughs]

David McMullen:
Holly, maybe while people think about that, could I ask a question?

Sarah Lisanby:
Go ahead.

David McMullen:
So, I think I heard "FDA" for maybe the first time a few minutes ago, in a little bit of a different way, and I apologize if someone else has talked about it during this workshop, but what do people think the long-term strategy is, and what regulation and oversight do people think the FDA should or should not be performing in terms of these clinical decision support systems?

Sarah Lisanby:
Yeah, I don't think that we actually have addressed that question yet, and so -- since we have a multidisciplinary audience here -- it's David McMullen on the line -- can you tell us more about what you mean by clinical decision support software and the role that the FDA devices branch has with respect to regulating that type of software?

David McMullen:
Sure.  So, right now, this is something that the FDA is currently figuring out, but they do have draft guidance out on what they term clinical decision support.  "Software as a medical device" might be another term for people here.  But basically, because they regulate medical claims, if you have a product where you are stating that you can help diagnose [unintelligible] or provide recommendations, then they are saying that they might have the regulatory oversight authority to regulate it.  So, you have to come in and demonstrate safety and efficacy before sort of an objective third-party body. 

And so, I know I've heard different systems being developed, and I haven't heard people talking about selling any of them so far, but for the long-term rollout of this and for clinical translation, there will likely be an end product.  And so, do people think about how to interact with the FDA, or how to design their studies so that they can be evaluated by them?

Roy Perlis:
So -- Roy here.  I think this is actually a place that NIMH can be extremely helpful as sort of an honest broker.  Most investigators -- most non-industry investigators don't have the resources or wherewithal to engage with FDA.  Pharma companies have, you know, offices full of people to jump through the regulatory hurdles.  But I think FDA's been clear with its software as a medical device guidance about what they want to do. 

So, I think if NIMH -- if we -- if NIMH wants to see more of these kinds of things that, you know, its R01-funded investigators develop [inaudible] FDA, I think [inaudible] some sort of mechanism to help investigators engage with FDA would be really helpful.  As much as most of us are happy to get our models into the world, I don't have the time to hire FDA lawyers, and frankly, I'd rather put my software out in the world and let someone else worry about commercializing it, which is what's happened.  Yeah, this is [inaudible], and I really could [inaudible].

Sarah Lisanby:
Great, thank you.  So, that's a concrete thing that could be put in place to help realize this vision.  Just to underscore what you're saying: mechanisms to help investigators engage with the FDA around issues like software as a medical device, and what that regulatory path would be, as well as facilitating partnerships with other industries that would then take and support that software, which would not be supported solely in an academic lab.  So, bridging with regulatory, bridging with companies, and facilitation of that bridging.  Did I capture your point right?

Roy Perlis:
Yes, exactly, and, given that toes have already been dipped in the water with other devices, I think software as a medical device is a good place to go next.

Greg Simon:
This is Greg.  I was going to add, I'm fond of analogies, and one analogy I would like to use to talk about sort of the different stakeholders or players in this space is the weather world, because I'm a big weather nerd.  And interestingly, in the weather world, all weather data in the world are pretty much in the public domain.  We could go find them right now if we wanted, if we knew what to do with them.  Then, there's a middle layer of modelers and analysts who develop weather models, which is an interesting mixed space.  It includes, you know, nonprofits, academics. 

It does include some for-profit people, but that layer -- that layer is on top of weather data, and it -- anyone can access all weather data, and a lot of people's models are out there in public for anyone to see.  And then, there are meteorologists who communicate.  You know, who work on television or on the radio or something like that.  They are not modelers; they are not data nerds; by and large, they are communication experts.

At least, you know, the way I see it -- in terms of the way, you know, my group and my team operate -- is, the data layer is the health systems.  We are in that weather modeler layer.  You know, we're trying to do things and put them in the public domain.  There need to be some meteorologists in this space who are about implementation and communication, and that is maybe a different skill set and a different business model.  And those may be the people who bring those things to the FDA, and I am all for them. 

So, you know, to me, it's an important sort of strategic question to think about, you know, which parts of this work live more in that for-profit world -- which, you know, works.  But I do need to acknowledge -- and maybe I'm in the same boat as Roy -- that I'm going to live more in the atmospheric sciences world than in the meteorology sort of public communication world.  And I would hope that some of those TV meteorologists would take the models that I built and put them to good use to inform the public about whether to have their picnic or not. 

Sarah Lisanby:
Great.  Well, that's a really nice metaphor.  I want to do a time check.  We're about at the end.  We have about two minutes left, and I'd like to again ask if there's someone else who has not had a chance to make a comment.  If you've been holding back, this would be your invitation to do so.  And as you think about what that comment might be, I want to go ahead and thank our moderators; I want to thank all of our panelists.  You've really covered a lot of ground in a short period of time, very helpful to us.  I want to thank our listening audience from the public who have signed on to join us, and to let you know that this is the beginning.

We will be archiving this, we will be transcribing this, and we'll be learning from the information that you brought forward to help inform what needs to be put in place to be able to build towards a future where multiple data can inform treatment selection and transform mental healthcare.  I want to go back to the words of Ken Duckworth, who had to step out.  Obviously, there is a lot of pressure underway in terms of the mental health needs of the patients that we serve, now more than ever. 

And we want to really provide not a fragmented, but an integrated and cohesive system of care that can help patients thrive and get the care they need wherever and whenever they need it.  So, let me just open it up to any last comments from my fellow program community members or anyone else.  And, hearing none, I want to thank you all again.  We appreciate your input, and we are going to digest and think hard on what you've taught us today.  I know I've learned a lot.  And thank you for hanging with us until the end, and I'm going to go ahead and adjourn.  Thank you so much.  Be safe, stay well, and take care.  Thank you.