

 Archived Content

The National Institute of Mental Health archives materials that are over 4 years old and no longer being updated. The content on this page is provided for historical reference purposes only and may not reflect current knowledge or information.

Workshop: Genes to Biology: Integrative Systematic Approaches for Revealing Biological Functions of Psychiatric Risk Genes and Alleles: Breakout Rooms

Transcript

Trey Ideker: Talk, and people who don't like to talk will talk. But one thing I'd love, though, is to get just some of our panelists who haven't had a chance to comment yet, just to get your thoughts about anything that you'd like to comment on. Ed Boyden, I actually wondered if you have any thoughts on the last session or just more broadly in that area of neuroscience.

Ed Boyden: Yeah. Sure, a couple of thoughts. I mean, now we're talking a lot about high-throughput screens and assays. What do we want to see? How do we see at scale? And how do we integrate the data? And so a couple of thoughts. One is that it's very interesting to think how different techniques and preparations connect to each other. Can they be integrated through the workflow? And we're talking a lot about different technologies. But can they be unified in some way? One thing we're just playing with in our group-- I don't know if it generalizes to the topics here. But can we really build a pipeline where we do live imaging, live mapping of RNA and transcriptomic and proteomic and other kinds of data, and then find some way to integrate all that data in a unified way over time? The other thing also is if we were to have [inaudible] screens at scale, are there any principles, thinking backwards from the goal, that might be fun to think about in terms of the resolution or the multiplexing or the throughput that would help us choose technologies or ways to integrate them? So [we've?] been thinking a lot [or?] thinking backwards from end goals lately and how technologies can be brought together.

Nicole Soranzo: I was going, in fact, to suggest exactly the same. This seems like a great opportunity to think about different types of assays and how to standardize them and make them robust in a comparative manner, so anything from electrophysiology. And then when you start to think about [layering?] genetic information in large numbers of people, this seems like a perfect opportunity, also, for the community to come up with standards as well as platforms that can take you to the next stage.

James Inglese: Yeah. Hey. I'd like to make a comment. I don't know how to raise my hand here except to do this. So I approach things from the pharmacological point of view. I'm always thinking of assays that can accommodate chemical libraries. And the problem with neuroscience has always been-- from my days at Merck all through NIH-- that you can't really generate the throughput you need to investigate the chemical library space that we have available to us today. So I was wondering if things like synaptic surrogates, for example, in a nematode model or an organism of that size, which we could actually do in high-throughput screening with hundreds of thousands of compounds with full dose-response relationships, could be a starting point for at least collecting chemical matter of interest to this community rather than testing what I would consider all secondary assay type model systems that you've described here. Nothing is extremely high throughput that I've seen today for the kind of work I normally do. And I'm just wondering what people think about that.

[silence]

Trey Ideker: Just to comment about what Ed said. Ed, I think your comment really does provide a nice segue into bullet point three on the screen here that we're supposed to be discussing. You said, "Can you chain these tools together in some kind of pipeline?" And insofar as that pipeline leads from very systematic screens to in-depth, lower throughput technologies, that is exactly what bullet point three here wants us to discuss. I thought this was one of the more interesting-- this is my favorite bullet point, I think, maybe together with the AI, machine-learning one. And, "Does that require central coordination?" I think, is the other pretty interesting discussion. NIH, should they be matchmaking and creating these workflows programmatically, or do we all somehow need to do it? And to what degree are we already somehow doing it?

Steven McCarroll: Here's a question for both of you and everyone. Do we have the conventions we need as a field to have routinely interoperable data sets, such that algorithms like those that Alexis described and others can sort of surf across the whole field's worth of data? And if not, what would it take to get there? What would we need to do as it's-- what would we need as a field to agree to? How should that happen?

Trey Ideker: Alexis, are you on? You want to comment?

Alexis Battle: I am on. So the question is, how would we get to a point where we can agree to share things of that scale, Steve?

Steven McCarroll: Yeah. And also would we need conventions on file formats? And what would get us to where we need to be for--

Alexis Battle: I think a lot of--

Alexis Battle: --[crosstalk].

Alexis Battle: Yeah. I think a lot of computational labs are equipped to handle issues like minor differences in file formats and things like that, but just access and ability to download and share between collaborators more smoothly, something where anybody who had approval to get the data could share with each other, could share their processed version, could share derived data, and things like that in a more easy way would be fantastic. Of course, more consistency in file formats never hurts anyone. But I think that's something that we're more prepared to deal with ourselves in our own labs, and we can't solve the issues of just simply how to share quickly and seamlessly and how to get access and how to make it easy for people to upload and share their data as well. Right?

[crosstalk]--

Mike Hawrylycz: This is related to the point I was making at the end of the previous meeting. Yeah.

Alexis Battle: I'm not sure if Hae Kyung is also on, but she raised this point a little bit as well.

And I think [crosstalk]--

Stephan Sanders: I'll just [add?] my voice to that as well. The--

Hae Kyung Im: Yeah, [you?] did. Sorry. Yeah. I agree that easy access but the systematic-- or interoperability is important too because it is true that we can [massage?] and handle the data. But sometimes you just don't do it because it's too much work. Right? So, I mean, there has to be some kind of systematic effort to make data like ontology, yeah, so that it's easy for analysts or anybody to kind of ask the questions and be able to answer the questions right away. I feel like there's a lot of wasted time in formatting data from one lab to the other ones. And it's true. We can do it, but it's all a matter of bandwidth. We don't have infinite bandwidth to do these questions-- to answer these questions.

Alexis Battle: I think having conventions about annotations, things like sample IDs so you can easily cross-reference multiple data types from the same individual, and annotations about date of collection and things like that that we always sort of need to have, if there were standards on that, it would help tremendously.

Anne Bang: Yeah. More--

Stephan Sanders: Even [crosstalk] [would be?] a good start.

Anne Bang: More pairing of computational scientists and biologists too because it's a pretty quick learning curve that you find out if you want your data to be analyzed in this integrative way, then you have to generate it and format it in certain ways. And that's what I found people in my lab have learned pretty quickly.

Hae Kyung Im: Yeah. I think it's a much more difficult thing to actually implement it than think about it. It's like a general librarian or a Google type of thing--

Hae Kyung Im: I mean--

Hae Kyung Im: --that allows you to systematically, I don't know, [crosstalk]--

Trey Ideker: --this is why bioinformatics and Python skills are going to be huge bullet points in people's resumes for the near future. I think you kind of have to have that skill set, and you also have to have the knowledge-- to HK's point, you have to have the knowledge of where these data sets are. I think for us, a big part of it is having the right collection of postdocs who can tell you, "Oh, I just read this paper, and that's a great data set. Why aren't you guys using it?" And that comes up in lab meetings. And I feel privileged to have that postdoc lab meeting. But is there a more systematic way to have a clearinghouse of these things? I mean, to offer a counterpoint, I get the willies every time these discussions about data formats and standards come up because I think, as some of you on the call know, the landscape at the NIH is littered with failed attempts or partially failed attempts to standardize things. And maybe it eventually happens but after hundreds of people spend years in rooms, who largely aren't biologists, by the way. I mean, it's a different breed these things attract as well. And I think right now, in this meeting, for instance, the mindshare has correctly been on how you analyze the data because that's where we should be thinking.

Geetha Senthil: [inaudible], we're up to 121 people.

Program Support: We're ready to go.

IT Support: Good. [inaudible]. Yeah. You're not on mute.

It sounds like they're ready to switch.

IT Support: Oh, you're not on mute.

Lora Bingaman: I know. Yep. So yeah, we're ready to go with the breakout groups. So just standby for one second.

Daniel Geschwind: How long should we go now that we're a little behind? Hello?

Lora Bingaman: [Until?] maybe 10 more minutes.

Hae Kyung Im: Yeah. I agree with the incentives part. There really aren't incentives to share the data in a way that other people can effectively use it.

Alexis Battle: It would be nice if there were some bioinformatics staff attached to major consortium efforts, shared not just within a single lab or institute but really shared-- their job was to help put the data in forms that most people would want.

Hae Kyung Im: Yeah, and continuity. Right? You kind of train someone that knows a lot about things, and then they leave, and then you have to start from scratch. Right? And that's how we work with this funding.

Mike Hawrylycz: Yeah. I'd just like to say I think the problem is extremely-- it's an extreme problem because it's a fundamental kind of non-acceptance of the need for infrastructure to support the complexity of the system.

Hae Kyung Im: But it's incentives. Right? You don't get paid. I wrote multiple grants [crosstalk] because--

Mike Hawrylycz: Right. It--

Hae Kyung Im: --it's not this kind of--

Geetha Senthil: Hi, everyone. Can you hear me? The breakout rooms are open. So you should have gotten an alert. I see one on my screen. So you have to click and join your breakout that you're assigned. So let's keep it to 25 minutes, and we may have to extend beyond 4:30. I apologize for the technical difficulties we faced today. Thank you for your additional 10 minutes of time. I hope you join the breakouts. See you soon.

[silence]

David Panchision: Trey, it would be good if you could pick up that conceptual thread again about data coordination because I think it's really important.

[silence]

[inaudible].

[silence]

Mike Hawrylycz: I feel like I'm being moved around virtual space here somehow.

Trey Ideker: Yeah. We just got kicked back in.

Richard Huganir: That was beautiful.

Richard Huganir: That was abrupt.

Daniel Geschwind: That hurts.

Mike Hawrylycz: Exactly. What the heck's going on in cyberspace here?

[crosstalk]--

Olga Troyanskaya: Yeah. It was like, "You're out." [laughter]

Olga Troyanskaya: [inaudible] before I can finish my--

Alexander Arguello: I hope nobody got dizzy, so.

[crosstalk].

Mike Hawrylycz: We actually might find ourselves in a Google kind of tech meeting. Who knows? [laughter]

Daniel Geschwind: It's kind of like Star Trek, being in the transporter…

Geetha Senthil: Thank you, everyone. I think we have all joined back in the main room. So now it's a report out from the moderators of each breakout session. Please go ahead.

Alexander Arguello: We can start with Dan and Nicole.

Daniel Geschwind: Okay. My head's still spinning from that quick transition. But I would say we tried to get through all the questions. It was very difficult with the four minutes for each of them in the time allotted. But what I'm going to do is just very quickly summarize and let other people chime in and modify. So rather than going through question by question, I'm just going to summarize some of the points of agreement, mention a couple of ideas and a couple of gaps. So, loci… the key issue-- huge advances in GWAS and identifying loci, but the key is to identify the causal variants and the genes. And those two things are really key intermediate steps that are probably best done in kind of higher throughput and large scale. Of course, the effects of a locus and its function may vary over time and space, and that has to be brought into consideration. It adds a level of complexity that has to be recognized, both developmental time and then the state of the tissue in terms of learning state or sleep, etc. Everybody agreed that integration is the most exciting area, this notion of the interface between kind of basic developmental biology and genetics and disease and kind of figuring out mechanisms that pull those things together. They really need each other to understand how genetic loci actually come together and work. Especially in the age of polygenicity, that causes a lot of challenges.

Daniel Geschwind: But the notion of bringing together the biologists and geneticists and modelers in kind of moderate size, MPI-type projects seemed to be widely endorsed. And again, we can get into more details. But I think another agreement is that for studies aiming at studying kind of these complex diseases, whether it's an individual investigator, you know, unless it's a highly detailed study-- but for these larger questions that are being asked here, a small N or N of 1 is kind of out for these studies. We kind of have to identify what that middle ground is. But kind of more N is going to help you understand in terms of number of genes, number of variants being studied. There have to be-- there are going to be different levels of analysis there, right, obviously. But kind of maybe one at a time is not the optimal way to go about sifting through these diseases. That doesn't mean you don't want to fund investigators studying a gene or a gene product. Of course, you do because you have to understand it. But in the context of the questions being asked here, for a kind of broad translating of genetics into biology, one should probably favor the larger N kind of studies.

Daniel Geschwind: One idea brought up about solving part of this issue of sharing information and stuff and ideas is to kind of use a Reddit kind of posting board, and that was Hae Kyung who brought that up. I thought it was very interesting. We all kind of liked it, where one can kind of have an ongoing posting and discussion and voting. There were a lot of gaps, and I'm sure a lot of the other groups are going to discuss them. One of the clearer issues is, although we kind of measure transcriptomes and we measure proteomes and we put all this together and identify eQTL and pQTL, the modulation of gene activity in space and time is a kind of dimension that hasn't been well studied. And that's going to be, again, really, really essential. And I think it also integrates with some of the other comments earlier. One of the issues is more understanding of human circuits. When you're doing a high throughput screen-- right? We can do all these screens now. And if you wanted to do a drug screen, what are you actually trying to fix at a neurobiological level? And for most of these disorders, we don't even know, in the human brain, what a normal millimeter of cortex really looks like at a molecular level or even at a synaptic level, although those kinds of things are being built, let alone how that might be disrupted in disease. Now, even if we did know that, some of that is going to be static information. But we'd be much better off than where we are right now where we don't kind of even yet have a human kind of ground truth that we are trying to model in many cases. And again, things that are going on at the Allen Institute and other studies like that are going to be very helpful there.

Daniel Geschwind: So kind of studying little bits of tissue with high resolution was one of the points brought up. And also, this notion of potentially one way to also approach this is to combine studies where you have developmental biologists working in model systems who have developed interesting assays and who have a lot of knowledge with kind of people who study the human genetics or human disease and kind of putting those together in projects where there's a kind of iterative kind of approach would be quite valuable as well. And that was brought up by Randall Peterson. So we agreed with the issues of kind of, "Consortia are kind of necessary, but you can't only have consortia." They're very useful for map making and for identifying key resources and building them, so especially where maps are needed. Protein assembly stuff was an area of interest as well. But I think I've probably exceeded my five minutes or four minutes. So I'd like to see if there are any other comments from the folks in my group or if Nicole wants to add something that I missed.

Nicole Soranzo: No. I think you've done a great job summarizing the discussion.

Geetha Senthil: Thank you, Dan. So we'll move to the next panel. Who is next? Nevan?

Nevan Krogan: Yeah. Jennifer and I will do a tag team. I'll let Jennifer take the lead, and then I'll add at the end.

Jennifer Cremins: Oh, great. Thanks, Nevan. So first of all, we had great fun in the last half an hour and covered a broad range of topics. I think the emerging theme from our group is that people are most excited about this notion of linking things across length scales and that if we could go from genes to RNA to protein-protein interactions at the synapse up to circuits and then all the way to behavior and somehow link these length scales, that would really be, I think, where the sentiment is that the magic will start to happen in terms of discovery. One of the main points that was brought up is that it would be amazing if there could be synergy across diseases and if we could learn lessons from genetic studies in other diseases to begin to think about how to apply them to a neurological disease. But a key barrier in taking this approach was very important, which is that you can take iPS cells and turn them, for example, into cardiomyocytes as fast as two weeks. And they're quite homogeneous. However, in the case of the brain, we're dealing with an extremely long time scale in vitro, and this can be quite refractory to this notion of drawing parallels. And in fact, I think it was raised by Pasca that people are turning sort of rapidly to direct reprogramming approaches for the notion that you can get semi-homogeneous cell cultures as well as speed things up a little bit instead of having organoids in culture for such a long time.

Jennifer Cremins: The other big theme that emerged for us, which I think is probably going to be raised by everyone, is the idea that, really, it's critical to start with the phenotype and get the right cell type, the correct circuit, the right developmental window, as well as the timing dynamics. And I think we had about five main points that we tended to focus on that the group, in general, thought were very important. The first is the issue of time scale. So it was raised that, in the brain, in particular, activation of circuits is happening on timescales of milliseconds or even shorter. However, on the molecular side, we then characterize on minutes, hours, days, even weeks, and thinking about how not only to get the right cell type but then, in neuronal activation scenarios, finding just the precise timing. And I, in my own group, have seen long range looping interactions and epigenetic marks that are indeed functional happening so quickly, and then they're gone. And so if you don't get the timescale just right and figure out how to bridge with the activation timescale, the functionality might be missed. The second thing is that there was a quite vigorous discussion around taking a bottom up versus a top down approach, with the idea that, "Should we be doing functional genomics if, in fact, we still aren't sure what synaptic property we should be assaying? And should we put all our effort now into protein-protein interactions at the synapse, characterizing them, and ensuring that the protein composition at the synapse actually matches what is expected and then seeing how that's perturbed in disease and then, finally, using that as the quantitative trait to then begin to bring in these very elegant large scale approaches?" And I think the pushback from that approach was that, look, we can go both bottom up and top down and begin to meet in the middle to make some progress.

Jennifer Cremins: Another big area that we sort of focused and centered around is, "At which length scale to look in the brain to start?" An emphasis from many on the call that I thought was really-- I learned a lot from it-- is in thinking that looking at the synapses might be too granular. And in fact, they're so very plastic over time and undergo homeostatic plasticity that, could we even be sure that starting at that level could give us a meaningful readout? And in fact, perhaps we could start by ensuring we're working in systems where we know the circuit. And then my understanding - it went very quickly - is that once we know the circuit and then we can understand how a particular genetic variant alters the circuit, it might be possible to then begin to quantify things such as dendritic arborization or other types of dendrite size, morphology, or shape features. One last thing is that there was a closing sort of discussion about what we can do in organoids in terms of getting functional circuits, inducing plasticity, LTP, and whether we need to know the protein properties of the synapse before we even begin to think about inducing LTP. And I think the emerging theme that I took away was just how important it is for all of us as a community to think about the functional readouts and the synaptic readouts and the circuit readouts and which things we really need to hone in on as phenotypic traits to begin to unravel the genetics and epigenetics. How about you, Nevan?

Nevan Krogan: You did much better than I could have. So I'd just quickly add two quick things, one about working across different disease areas, which is something I'm personally very excited about. Obviously, at the cellular level, there's complexity there. But at the molecular level, especially when we study them in vitro, some of these are the same proteins across these different disease areas. So more synergy across disease areas, I think, is something that's incredibly exciting. And then, yeah, the big discussion about, "What's the right cellular model?" When I go into these neuro calls, that's most of what's discussed nowadays: "Okay. What cell should we be working on? What model should we be working on, right?" And I actually see it as a plus because there's different things to choose from. And it was brought up that networks are going to change depending on the cell type you look at. And I actually think that's an exciting thing. So I think we do have to generate these types of networks, including protein-protein interaction networks, in a number of different cellular systems. If we can do it temporally and spatially, even more exciting. And then the idea of just converging bottom up, top down, meeting in the middle, to me, that's a very exciting prospect in this particular area. So that's all I wanted to add.

Jennifer Cremins: Cool. It was fun.

Geetha Senthil: Thanks, Jennifer and Nevan. We should move to the next breakout group. Richard?

Richard Huganir: So that's Ellen and I. And Ellen's agreed to report out because I'll be leading the next moderation-- moderating the next discussion.

Ellen Hoffman: Sure. So we had interesting discussions that followed along the bullet point questions. Some of the points were already brought up by the other groups. But for the main points of discussion, first, regarding the public-private collaborations, I think everyone thinks that having more collaborative networks across groups is essential. And there's definitely a need for more sharing of data. Alexis, in our group, mentioned that sharing more and sharing earlier is going to be very relevant. But this can also present challenges when trying to interface with the private sector, where they might have particular specific questions in mind that they will want to address. So I think Ed, in our group, also came up with a really interesting idea: if the NIH or journals required the sharing of data more and earlier and also sort of served as a matchmaker between not only public and private but also sort of within academia, if there could be sort of a matchmaker process in that regard, that could also be helpful in fostering collaborations.

Ellen Hoffman: And the second question that we talked about sort of-- with regard to the second question, "How do we identify specific disease mechanisms that move from screens to mechanisms?" I think one question that we had some discussion about was, "Which phenotypes to focus on?" I think this has come up in a lot of discussions sort of, "What are the druggable targets if phenotypes can vary? Is any phenotype relevant?" And I think through our discussion, we came to the conclusion that we do anticipate that there will be convergence across risk genes, and we see phenotypes in multiple assays as being an entry point. Anne made the point in our group that any phenotype could really be an entry point into understanding a relevant disease mechanism. And that sort of gets to our third major area, which was, "How to prioritize genes? How do we prioritize the findings for more in-depth, lower throughput technologies?" And here, I think Ed and Stephan had some really interesting ideas about the NIH really sort of setting up criteria for prioritizing genes that could be very helpful and informative for neuroscientists who maybe are less familiar with the genetic studies. And so Ed suggested sort of using the BRAIN Initiative as a model, as a guide for sort of building a consortium, and Stephan suggested maybe having the NIH suggest a list of priority genes that we could sort of group into sort of-- if you have a low throughput assay, which genes do you want to study, versus a medium throughput versus high throughput? If you can study 100 genes at once in your system, which are the genes to target? And it might be helpful for NIH to provide criteria in that regard.

Ellen Hoffman: And that's something else we could probably discuss within the larger group discussion. And finally, I think the final point is really how important it is to integrate across different platforms and to integrate the genetics and neuroscience to really go from the molecular to the cellular, to the circuit, as I think everyone's been discussing. But also, I brought up the importance of really not forgetting the clinical data and the need for deep phenotyping of individuals who are affected with the different disorders and really to do that along the developmental trajectory and really to use our data integration to really go from the genetics and molecular all the way up to clinical population and deep phenotyping of affected individuals as well. So I don't know, Richard. There were other points? I think that was sort of a broad overview of what we discussed. Were there other points that I missed that you want to bring up?

Richard Huganir: I think that's great. Particularly, I like the idea of modeling on the BRAIN Initiative and some kind of organization bringing groups together from different backgrounds to tackle specific genes or specific groups of genes at different levels and integrate online, in real time, the results, and share that as they progress.

Geetha Senthil: Thanks, Ellen and Richard. We move to the fourth breakout session moderators, Trey and Olga.

Olga Troyanskaya: So we're also going to tag team. I'll just start, and then Trey can jump in whenever I finish. We talked a lot actually across questions. And one of the key efforts that we think is really important in answering both the question of how to select and prioritize findings, what research mechanism would be useful, and even the question of integrating genetics and neuroscience-- we felt that what was really needed is a serious effort for data coordination, and perhaps that may actually be a perfect mechanism to do central coordination from the NIH perspective. We felt it was really critical to have the most open possible access and aiming towards interoperable file formats but, most importantly, having a lot of information on the metadata, etc., for these data sets as well as, while this may be a tall order, really thinking at least of starting an investigation into some form of knowledge graph structures to capture basically our understanding with some depth to it.

Olga Troyanskaya: So there was a suggestion to perhaps have a study group to investigate this, but at the very least, really having serious metadata that will make these data sets available to people and possible to analyze. We were pretty open on the specific mechanisms of how this would be implemented as long as it really encourages open access and data sharing. But one specific possibility might be even something like an RFA that would ask for collaborative interdisciplinary teams, at least for the question of selecting and prioritizing screens in selective areas. And one critical question that kept coming up again and again was really thinking, in addition to the specific functional questions and genetic questions, about the question of zeroing in on cell types and assays that would need to be prioritized. And what came up multiple times was the need to really not make assumptions of what cell types and contexts matter and really open that to investigation and let the data tell us. And that was sort of the key set of questions that we discussed. And Trey, do you want to take over and round this out?

Trey Ideker: Sure. I thought that was a really good summary. And maybe all I will add is to this last point, which is really bullet point three here, "How do we select and prioritize findings from systematic screens and funnel them?" You know, it's this funnel image being invoked here, into progressively medium throughput and then lower throughput single gene, protein, and molecule assays, but those that have a lot of detailed information. Kind of like the discussion about the throughput of genetic interrogation versus structure that we had this morning. Those different approaches are along a continuum of systematic versus in-depth. Whereas one day, we do think-- I would certainly think that one would like to contemplate something like a human genome project scale effort where you coordinate, you look at the function of these variants in these different neuropsychiatric diseases in a very coordinated way.

Trey Ideker: We definitely felt like we're not yet at the stage of the NIH saying, "You sequence chromosome 5. Your center sequences chromosome 12," but that we still need to let flowers bloom. And so that kind of led to Olga's proposal there to, "Could we contemplate an RFA along the lines of this bullet point three?" but where you specify key review criteria that the proposal has to meet but stop short of saying, "These are the particular technologies we want to see." Again, maybe down the road, we can move to that, but we felt like we're not there yet. But we are in a very important era here of realizing that sequencing itself isn't going to give us the solution, that we need to bring in an interdisciplinary, inter-data-modal approach. So maybe I'll rest on that. And--

Geetha Senthil: Okay. Thank you, Trey and Olga. So I would now turn this over to Richard to moderate the roundtable discussion.

Richard Huganir: Sure. So this final roundtable really is to come up with a set of concrete opportunities, short term and long term, to make transformative advancements in how we approach genes to biology at the scale and depth that has not been done before and identify what framework mechanisms-- collaborative science versus other mechanisms-- going forward. So that's our task. And so I really want to open this up for discussion to all of the panelists. I think we've had a great day, and I'll try to summarize it later. But I think maybe we start the discussion on short term - so what can we do now? Is there anything we can do now to really start the process of interrogating specific genes or alleles and testing them? What is the best approach for that? And then maybe move on to the more long term approaches. I mean, I'll say I have a particular bias, which I've expressed before, that looking at the de novo mutations and the rare variants that are coming up in the current screens is the place to start and to interrogate them, especially for genes that we know what their functions are, or we think we know what their functions are. So that's something we could start to do now, and I'd like to open that up to comments. So I would encourage anybody to raise their hand and start this discussion. Okay. Stephan?

Stephan Sanders: So I think I got the impression across the groups that there was some appetite for an idea of a somewhat centralized list and some level of coordination in experiments. The vision I would have of this is that there is a prioritization of genomic loci. We can discuss how that would be done, with these different tiers depending on the scale of your throughput, and then there would be an RFA where people bring along their suggestion of what technology should be applied. I mean, I take the point that we do not know what we're looking for, where we're looking for it, when we're looking for it. And so I think having a very, very broad camp of that technology. But by centering it around a defined set of genes, you'd have a set of data which would be very, very easy to integrate together, in the same way that in GTEx, you go and look up a gene in the genome browser. You go and look up a chromosome. We would have those genes as being the central point. I obviously, as a rare variant person, favor the de novo high impact. But given the common variation contribution towards these disorders, I think it needs to be a group of genes which is oriented towards both the common and the rare variants. Now, I think one issue from this is going to be that there are going to be winners and losers. There are going to be some people whose favorite gene gets selected and some who don't. And there are going to be some people whose favorite disorder gets selected and some who don't. But what it would give us is a concrete set of data, which could be analyzed together, and then also, hopefully, a route forward, which could be expanded to other genes and other disorders once we have an idea how to deal with this Herculean task of understanding the overall set.

Richard Huganir: Great. Right. So this is something that's come up in other work groups at NIH and something I've pushed at both the Stanley Center and NIH too, to make a list prioritizing genes and alleles. And I've met some resistance to doing that. Maybe that resistance is changing with the progression in genetics. So it would be good to hear from NIH possibly. But Gavin, do you want to comment?

Gavin Rumbaugh: Just to say that I agree with you, Rick, and any other panelists. I mean, I’d like to say that I agree to the extent that I'd put my money where my mouth is and that I'd stake my whole career on a risk factor because of the definitive nature of the risk factor. And so I think it would be fantastic to prioritize genes that people are confident in that cause human disease and understand those diseases as well and then study them in every possible scale, right? In an organized way. So the question I had back to you, Rick, is, are you imagining an RFA at the level of program project, a small consortium, or a series of R01s? Because I would just say that a larger scale program project type consortium around a single risk factor to look at every possible scale, I think, would be quite exciting.

Richard Huganir: Yeah. I think that came out in, at least, our discussion in our breakout group, this idea of modeling after the Cell Census BRAIN Initiative programs, where they brought together many groups to collaborate and coordinate research in characterizing various cell types, etc. So I think that's a really positive-- beyond actually a program project grant, but a larger consortium that would approach different genes or different sets of genes. So, Trey?

Rebecca Beer: Sorry. I would like to pose a question to the group. I'm Rebecca Beer. I'm the program director for the Functional Neurogenomics program here at the NIH. So I heard, in the discussion so far, a lot of enthusiasm for doing these types of analyses with many genes at once. And I'm interested to hear the group's opinion about how prioritization of a gene list would affect these kinds of efforts. If we say that X, Y, and Z genes are 1, 2, and 3 on the list, is that going to discourage investigators who are interested in looking at 20, 50, 100 genes at a time?

Richard Huganir: I don't think so. I mean, I think-- because that's going to happen. Obviously, these are really-- probably, most of these disorders are polygenic diseases, and they have to be looked at with many genes at a time. But I think it would help guide the general population, the general NIMH investigators who don't follow the genetics. I've heard many cases where people proposed to study this gene, and then it just gets rejected by NIMH. So I think some guidance-- I mean, it doesn't have to be dogma, but some guidance would be helpful. Trey?

Trey Ideker: Thanks, Richard. Yeah. No. I think this is a really interesting discussion and the right one to be having. And so the following is meant not as overly critical but just to kind of stimulate discussion here. But I didn't really understand, the more I think about it, what was meant when we say we want a multi-scale approach focused on a set of genes. So isn't the gene level one of the scales we're talking about? So if you focus on a set of genes, which I can see the merits of, inherently, you're going to have a hard time going multi scale, at least, as I understand scale. But we should talk about what we all mean by scale, of course. But as I understand it, scale would then also study going down in molecular scale to look at structure of a protein complex and a protein in individual atoms and then going upward to look at systems of genes and pathways and organelles in cells and tissues and organisms. And so when you say multi scale focused on genes, I question whether that's not an oxymoron. Can you have both, multi scale and focused on a set of genes?

Richard Huganir: Well, this is not just my idea, but I think people can chime in. But, I mean, I view it as-- many of us are looking at synaptic levels, but some are looking at, for example, mouse models or zebrafish or other model systems. We can't all do that in one laboratory. So having a collaborative funding mechanism where people could get around either a single gene or a set of genes and then work at those different levels and promote collaboration and communication between those that are at different levels is the idea I'm thinking about. But others can chime in. Dan, do you have your hand up?

Daniel Geschwind: Yeah. I also just wanted to make sure that, I think, one does not want to, at all, squash the notion of studying a gene function, right, its structure, its complexes, and all of that. But I think if one's making a claim that one is studying disease per se and understanding how genetic variants influence disease, then the notion of studying more than one gene becomes more important. I know that it might be subtle, but I just wanted to make that clear because I want to-- that we're not saying that one shouldn't be studying a single gene function because that's where a lot of advances really come from, studying a particular receptor or transporter or something like that. Of course, yeah. But there's a distinction. Anyway. I--

Stephan Sanders: Could I respond to Trey's comment?

Richard Huganir: Sure.

Stephan Sanders: So the way I would imagine this working-- so firstly, there is great value in studying one gene and understanding its integrative [biology?], as Dan said. But thinking about ways of using the gene lists that we have, I think looking across multiple genes is really where we can see this point of convergence, which, hopefully, is the true phenotype towards disease. The way I would imagine the scale working from a practical point of view is that you have a ranked list of genes. And if you have the sort of assay which is able to deal with 10 genes, you take the 10 top genes at the top. And if you have the sort of assay which is applicable to 100 genes, you take the entire list, and you basically fit where your level of assay fits onto that ranked list and use that to guide you. Now, it might be that your assay is only applicable to a subset of those genes, in which case, still that ranking works. You just skip some because they're not applicable. And it would be a bit like in the ENCODE project, where there's a small number of cell lines where you've got every single assay being done, and then the large number of cell lines where you've got some assays done. And so it's sort of filling that matrix of all the different assays along one axis versus the 200 or 300 ranked genes over on the other side so that your body of knowledge starts with the top. And in prioritizing those genes, you'd want to make sure that you are getting different systems and different disorders and common versus rare being prioritized so that we could see the convergence across those main dimensions.

Trey Ideker: Yeah. So just to quickly chime in-- so that made a lot of sense. And I think what you just made me realize, at least, is that there's at least two uses of the word scale here. I was thinking about scale as biological scale, whereas I think when you use the word scale, you mean the number of genes. The scale of the assay. Right? Right? The scale of the assay and the scale of biology that's the target of that assay are two very different concepts. And so in your use, I think you're talking scale of the assay or technology. And that makes perfect sense. Some are able to measure all genes in a wide scale type of thing, and some are able to look at one protein or one residue in a protein.

Stephan Sanders: Could you define [crosstalk]--

[crosstalk]--

Stephan Sanders: --of the scale just to finish your thought there?

Trey Ideker: Well, so the way I was, I mean, understanding it from some of the talks and the conversation today was we have different biological concepts and objects, of which genes occupy but one tier. And certainly, we have cells, which is-- cell biology is a different scale than genetics and molecular biology, which is a different scale than atomic physics. And that has to do with-- I guess, if you press me, that's the nanometer-- or the meter measurement of the object under study. And so a gene is, yeah-- and a residue or a protein is on the order of tens to hundreds of nanometers, certainly, if you're talking about larger protein complexes. But then you can go to bigger objects and smaller objects. So for instance, we had a discussion - I forget; I think it was on the general call - about focusing on the synapse. Right? And so that was a proposal very similar to, I think, the one Richard made, if I'm not mistaken. But there, the list would be organelles, insofar as the synapse is an organelle. And so the synapse obviously is a much bigger scale object than a single gene or a gene product. And that was how I was understanding the notion.

Richard Huganir: I think it's scale in both directions. Right? It's a scale of the number of genes you're assaying and a measure of scale at the level of cells to organisms, etc. So I think, Steve, did you have your hand up? And Ellen, I don't know if your thoughts were expressed or not.

Steven McCarroll: I was just going to say, I think, what was just expressed, that part of the value of nudging a community toward working on the same set of genes is to create functional data sets that have cumulative value, in which the whole is more than the sum of its parts, and it's possible to computationally look across many different kinds of functional analyses of perturbations of the same genes and sort of learn from that altogether. But then the other thing I wanted to mention is that ultimately-- I mean, for so long, polygenicity has sort of seemed like this curse, that it made it really hard to get to biology in these disorders. And it's been kind of true. I think it's made it really challenging to define paradigms, which I think is one reason why conversations like these ones today are important and why it is important to kind of let 100 flowers bloom, because you really don't know what the optimal paradigm is yet. But it may be that in the end, especially now that there's so much more kind of high throughput, high dimensional biology, we'll look back and see that, as a community, we can make this polygenicity into a really good thing. So just to contrast with Huntington's disease, for example-- for so many years, we've had so many conversations like, "Oh, if only schizophrenia was like Huntington's and it was really obvious what the gene was and you had a Mendelian gene of large effect that explained most cases."

Steven McCarroll: And Huntington's has that. It's had that for 25, 30 years, but it's still very confusing because that gene-- just because it's one gene doesn't solve the problem, because that gene does many different things, and it's been very hard to figure out which ones are actually part of the pathophysiology and which ones are just interesting but not part of the pathophysiology. And in Huntington's now, there's this new chapter of progress from the age-at-onset modifiers, where the phenotype actually is polygenic but is pointing in very specific ways to very specific pathways that might be much more tractable to therapeutics. And so I'm excited by the possibility that collectively, as a field, we may be able to kind of do something similar in schizophrenia and autism and really turn polygenicity into a strength by finding the convergence points of all of these different genes.

Elise Robinson: To add to Steve's point, I think you could help with that by selecting genes or types of genetic risk factors that may create risk for the same outcome but clearly, from genetic studies, appear to do so differently. So, for example, in the case of autism, already, we know that there are genes that create risk for autism at a genome-wide significant level that pretty much never do so absent broader syndromic complications. And there are those like ANK2 that just kind of leave you alone otherwise for the most part. And likely, this is going to become the case for schizophrenia as well. And if you work across the genes that are associated, the CNVs, the polygenicity, leveraging what they have in common in light of all their differences could actually create a more informative data set. And I think in selecting genes, it'd be great to choose some that allow you to look at, "What do these things have in common? And what do these things-- how do they differ? Why can they create risk for schizophrenia while doing or not doing this other thing?"

Richard Huganir: So I think this might be a good transition to longer term views, so there are short-term and long-term strategies. So I think that polygenicity has strengths as expressed by Steve, Elise, and others. But I think, in my mind, that's a very complex process and will take additional studies. We've heard from Kevin earlier about cell villages to try to address polygenicity. So maybe we could turn to how to address polygenicity in the long term. So if anybody would address that or have other ideas for long term approaches? So, Dan?

Daniel Geschwind: I just want to add one point to that without addressing the long-term issue, which is that, to just emphasize Steve's point, there are also numerous-- dozens of examples of drugs used in humans whose targets are GWAS hits. So a small effect size doesn't limit a gene's kind of applicability to kind of being central. The effect size of a mutation maybe isn't the same, right, as its being a key pathway in disease. And I think we have to recognize that. Yeah. I'll just shut up.

Richard Huganir: No. Great point. All right. Other comments? So one thing we haven't talked about is sort of other model systems. And long term, I think we will have to go to other model systems such as primates or marmosets or other approaches. We have not talked about really phenotyping and patient stratification. These things are going to be absolutely necessary, I think, going forward. So I don't know if people want to address those issues.

[silence]

Alexander Arguello: Are people getting Zoom fatigue?

Elise Robinson: Four months ago. [laughter]

Alexander Arguello: You should have Zoom tolerance built up by now.

[silence]

Richard Huganir: I think it's only appropriate that the sun is setting across my face right now, so.

Daniel Geschwind: I have a question rather than a statement, and it's more to people who are studying basic mechanisms. If we identify genes, let's say, that we know cause, let's say, autism, intellectual disability, epilepsy, etc., maybe even autism or schizophrenia, whatever, and now one makes a mutation in an animal or in a system, how does one approach understanding that kind of genotype-to-phenotype issue, where just one basically isn't sufficient? And I'm not saying it's not. I would agree that it is sufficient to just understand what that gene is actually doing without kind of connecting it to some human behavior directly, given all the pleiotropy of these loci.

Richard Huganir: I think obviously this is an issue with NIMH, that it's hard to study schizophrenia in a mouse or a zebrafish. And so I think there has been a movement to study in primates, which are closer to us in behaviors, and to study them with some of these genetic modifications. But personally, I think that studying these genes in models such as zebrafish and mouse models is really critical to understanding the biological function. And that's going to play a role in how we approach mutations in these genes in schizophrenia.

Elise Robinson: I feel like it kind of goes back to Steve's point about what exactly is going on with the Huntington's disease gene that's of relevance to the disease, again, where one can leverage the benefit of having many genes and many mechanisms of genetic risk where you attempt to find the intersection and recycling points among kind of the broad mess of consequences that all these things can create.

Well, [crosstalk]--

Stephan Sanders: Not only find them but quantify them too. Say, for example, maybe in dendritic spines, we're already seeing that kind of polygenic convergence. But trying to show how does that relate to, say, the microglia story or synaptic pruning story and be able to put an effect size on those across multiple genes. You can imagine that network being a way to really start to understand what are the common factors across multiple different loci and to what extent they seem to matter.

Richard Huganir: Yes. And in our subgroup, Stephan brought this up. Do we expect similar phenotypes with these different risk genes? In genes that are involved with synapses versus genes that are involved with chromatin remodeling, are we going to get the same phenotype? And when we assay them in our different systems, do we expect that? Should that be an expectation? So, I've got Gavin and Helen?

Gavin Rumbaugh: Yes. Thanks, Rick. So I would argue the reason why we've known the gene in Huntington's but we don't understand the disease mechanisms is because we don't understand movement and movement disorders at the neurobiological level. And so I think that that, to me, is why I personally think we should look at rare variants with huge effect and study them at multiple biological scales so we get a better understanding of-- so we can come to a consensus as a field of like, "Well, these risk factors for autism affect principally these circuits in cortex and striatum relative to the rest of the brain," or, "These schizophrenia risk factors seem to affect these circuits in this area of the brain." I don't even think there's that kind of consensus yet. And so just understanding basic biological mechanisms that are associated with neural domains of these human disorders-- I think we need to understand, and I think the fastest way to get there is to take these rare variants that will come…

Richard Huganir: All right. Helen?

Helen Willsey: Yeah. Thank you. So most of my points have just been summarized. So I'll be brief. But especially what Elise said, each one of these large effect rare variants may have many different phenotypes when you knock them out in, for example, a model organism. And how to know which one of those is actually relevant to the disorder is hard. And to me, that's why Huntington's has been historically difficult, because there's only one gene, right? But now with things like autism and schizophrenia, because we can look at many different genes in parallel, we can now figure out what are the common phenotypes. That's the key thing to figure out. And that will lead us towards the relevant cell types, brain regions, and things like that to be able to move forward. And so I think that organisms where we can do many genes in parallel will be very useful as a first step to figure out where to look. Right? And then you take those into more sophisticated, lower throughput sorts of studies to actually get towards fundamental mechanisms.

Richard Huganir: Alright. Great. Ellen?

Ellen Hoffman: And so I want to come back to Dan's question because, in terms of the animal model systems, I think that looking at basic biological mechanisms is going to be relevant for understanding the function of these genes. And I think most people on the panel would agree that I don't think you have to necessarily recapitulate a human behavior or a humanlike behavior in an animal for that to be relevant. I think that's clear. But I think that one of the challenges in terms of modeling risk genes in animal models is: are all mutations the same? And so, for example, when we say we're going to model-- we'll take the top risk genes, I agree; I think we should study high-confidence risk genes. But do you model then loss of function? Do you recapitulate a patient-specific mutation? Are all loss-of-function mutations the same? Are some dominant negative mutations? Again, how do we want to think about that? And also, it's likely that genetic background plays a role, so that the same risk gene might confer risk to autism and schizophrenia. I think that that's more complicated in terms of trying to figure it out. But I do think that trying to understand the basic biology of these genes, or how they affect synapse function or synapse formation or particular cell types, will be relevant, and that's the role of the animal models.

Richard Huganir: Yeah. Again, I think that's why we need sort of a funnel mechanism where we're broadly surveying many of the risk genes and alleles and then, only with consensus, generating these very expensive models, such as marmoset models, to screen them further. Alright. Other comments? I think we're coming to the end of our time. But what about looking long term at developing therapeutics? Because that's where we want to go. And to generate therapeutics, whether it's genetic therapeutics or small molecule therapeutics, how do we get there? We can do high throughput screens. We can test in animal models. How do we get to a druggable target or a therapeutic?

Jennifer Cremins: Yeah. So I have some questions about this. It's an area that my team and I have been grappling with, and I just wanted to share what we observe in our data. We study genome-wide epigenetics and genome-wide looping interactions. And what we see in our iPS models of either common-variant-driven or rare-variant-driven diseases, across multiple disease states, is that there may be a GWAS hit or a fine-mapped variant that, in turn, gains an ATAC-seq peak and is thought to be the so-called causal variant. And one could think, "Okay, we should potentially target the causal variant that shows up in the GWAS and then gains an ATAC-seq peak." But what we also see is that, in addition, there can be upwards, depending on the model, of 100,000 additional gained epigenetic marks genome-wide, or 20,000 altered loops. And this is a severe unraveling of the cellular state, independent of the genetics underneath these sites. And so I just wanted to hear people's perspective. Do we think that the GWAS hit is the driving event that then causes these massive epigenetic changes? Is it under consideration that, in these perturbation studies, we should also determine the direct functionality of the epigenetic marks that are not at the causal variant? And what I'm leading to with this question is: which ones are the therapeutic targets? What if the epigenetic marks that aren't at the GWAS hit actually have a stronger effect size or are more functional? Should we be going after those as the therapeutic target? Or should we be going after the primary GWAS hit? I don't know the answers to any of these, but we're certainly grappling with them quite a bit.

Richard Huganir: Alright. Comments on that? So Stephan, again?

Stephan Sanders: So I think it's very hard to imagine developing therapeutics until we can answer some very basic questions. For example, what epigenetic factors matter? What cell type is important? In what species can you replicate these effects? It's very hard to imagine a screen with an endpoint you could take forward until we've answered those kinds of basic questions. I do think one point of early traction is nucleotide-based therapies, so gene replacement or CRISPR editing, CRISPR-i, CRISPR-a, some of those, for some of the neurodevelopmental disorders, with the goal, first, of creating some benefit in a small number of people, but also asking the central question in autism of, "When can it be treated?" Right now, that seems a major, major sticking point in terms of therapeutics. I think that's maybe less of a concern in other disorders where we have something like antipsychotics and we know there is an ability to modify the disorder. So in autism, the low-hanging fruit looks to be gene therapy approaches in a small number of genes, probably in people who are more severely affected, because that's going to be easier from an FDA point of view. But I find it very, very difficult to imagine screening for other disorders until we have a better idea of what we're screening in and what we're trying to see change.

Richard Huganir: Yeah. Of course, I would agree with that because I think these are developmental disorders, right? So one key factor, of course, is when we treat and how we treat. Some of these, especially the haploinsufficiencies, are the low-hanging fruit in my mind: there are relatively large numbers of intellectual disability and developmental disorder mutations, de novo mutations, that lead to haploinsufficiency and that could, hopefully, be addressed through antisense oligo approaches, which have revolutionized many of these disorders, as well as through possible genetic rescue. And I think that's a proof of principle for impacting cognitive disorders; it's the most rapid way to really show that we can do that. But, of course, we need to develop an idea of when to treat these. At what age do we have to treat for intellectual disability? Earlier for schizophrenia? Is it at the onset of symptoms? You know what I mean? So I think these are really important questions. Alright. Any other comments?

Ellen Hoffman: I'll just say that, at the same time, for the treatments that we currently have for many of these disorders, we really don't have a clear idea of the mechanism by which they're working. And yet these are the approved treatments for these disorders. So I would agree; I think our goal is to achieve more mechanism-based treatments grounded in molecular identification. But at the same time, if we find a drug that modifies a behavior that might be relevant, that may also be useful, even if we have to study the mechanism as the next step. I just want to put that out there: that might also be relevant and be an improvement on the existing treatments we have now.

Gavin Rumbaugh: Yeah. If I could just follow, I was going to say the exact same thing: psychiatric disorders are defined by behavioral alterations, and we know that circuits drive behavior. So if we can figure out ways to selectively modify the circuits that we know underlie certain behaviors, then maybe you can intervene downstream in a bunch of genetically distinct disorders, because they may share common behavioral phenotypes. But that emphasizes even more the need to understand the biological mechanisms that drive behavior.

Richard Huganir: Alright. So I'm getting a notice that we're coming to an end here. Geetha specifically asked, what do we think about next steps? 

Stuart Anderson: Would it be okay for one of the public to chime in? This is Stuart Anderson. I've been working on schizophrenia and neurobiology, both as a clinician and as a scientist, for 25 years. Could I make a point?

Richard Huganir: Make it very quick because we have 10 minutes until we're shut out again and go back down the wormhole.

Stuart Anderson: Got it. So there have actually been a number of studies that have some clinical relevance, in particular on a circuit related to the ventral hippocampus: papers in Cell, Molecular Psychiatry, and Biological Psychiatry, all within the last year, and papers in PNAS and Molecular Psychiatry within the last five years. It's a hypothesis elaborated by Tony Grace 20 years ago, a circuit involving ventral hippocampal hyperactivity activating ventral tegmental dopamine circuits via the nucleus accumbens. Interneuron transplants, MGE-derived, into the hippocampus can correct psychosis-related behaviors in 22q mice, in MAM-model rats, and in cyclin D2 mice, so a variety of models that all share this ventral hippocampal hyperactivity. And humans who are psychotic have anterior hippocampal hyperactivity. I simply bring this up because there actually is targetable circuitry, and my perspective is that the future is likely to hold this kind of focal, circuit-based gene therapy, probably using chemogenetics rather than interneuron transplants, since the more recent studies are using chemogenetics. That is likely to be one place where there's going to be progress in the future. It's not going to solve every problem a person with schizophrenia has, for sure. But it might make their quality of life a lot better. I just wanted to say that.

Richard Huganir: Great. So we have not talked much about these circuit-based approaches. Obviously, all these phenotypes are circuit based, and these are approaches to directly target the circuit rather than the gene or the allele. So we have just a few minutes, and obviously I can't summarize the whole day, especially after seven hours of Zoom and the sun setting in front of me. But I do want to end on an optimistic note. I think we're all here because we think this is really a turning point in psychiatry in general - at least, I do - that we've come a long way in the genetics and that we are really getting tractable genes and alleles that are risk genes with high effect sizes, and some that are causal, especially among the autism and intellectual disability genes. So I think there is amazing progress. But this is not going to be easy going forward. I mean, I think that's why we're all here together: we need approaches at many different levels to address these questions. We still don't know the appropriate phenotypes in our various assay systems.

Richard Huganir: But we have, I think, the beginnings of a handle on mechanisms, certainly ones that focus on chromatin modifiers and on the synapse but also on inflammatory pathways. And so I think we can start to interrogate these at many levels, both at the single-gene level and also, as we talked about, at the polygenic level. So this is going to require, as we've discussed today, many different levels, from two-dimensional cultures to organoids to different model systems. Each model system brings different strengths. And in many ways, we don't speak the same language. This is something I've experienced coming in as a biochemist and molecular biologist learning genetics. And luckily, I've been learning it from some of the best people over the last 10 years.

Richard Huganir: And so I think that's an issue for us as well, to start interacting and speaking the same language. So I think if NIMH can really help promote these interactions and consortiums that could address this at all levels, that would advance the field tremendously. So I think that's maybe the next steps is really to ask NIMH to help us and really-- help us communicate, interact, to share data and approaches. And I think the PGC is a great example of how coming together really has made a big difference.

Geetha Senthil: Thank you, Richard. And thank you to the chairs, Anne and Steve, for working with us for the past six weeks now, developing this agenda and identifying speakers and panelists. So thanks to you both for playing a big part. And thanks to the speakers, moderators, summarizers, and panelists for spending so much time in this virtual space and making this a productive meeting. I really appreciate your time and all the incredibly useful comments you made and the questions you raised. Steve, Anne, do you have any final comments before we adjourn?

Steve McCarroll: I wanted to thank everyone for contributing their time and their ideas. I couldn't be happier about who participated or how they participated. And just seeing the ferment of ideas in the field, not just the new genetic results but the new ways of analyzing biology that people are making possible, and the collection of ideas and ways of thinking around this, it's hard not to be excited about the possibility and to feel like this is a moment of real opportunity.

Anne Bang: I second that. Thank you to everyone for participating. It was a great day.

Trey Ideker: Bye, everybody.

Bye. [crosstalk].

Geetha Senthil: Bye. Thank you all. Bye.