

2022 High Throughput Imaging Characterization of Brain Cell Types & Connectivity, Day 2, Part 1

Transcript

ERIN GRAY: I'll let you know when you have about four or five minutes remaining. And so again, please do enter those questions into the chat. So our first panel summary today will be presented by Drs. Dirk Keene and Mercedes Paredes, from the session on acquisition, processing, and characterization of diverse human brain specimens across the lifespan for cellular resolution imaging. It looks like I can see your slides, so I'll turn the floor over to you. Thank you.

MERCEDES PAREDES: Great. Good morning, everyone, a late morning or early morning. We had a very vivid discussion and, I think, a lot of participation, especially from our neuropathology colleagues and researchers, trying to think about how we can best coordinate what's coming into our pipeline. Before we even start to think about the experiments, we need to make sure that we have high-quality samples and well-represented samples. I will start with one issue that we didn't discuss but that I think is important to highlight from the start: ensuring that the samples we are acquiring and incorporating into our experimental pipelines are representative of the communities that we are interested in and are broadly representative of all populations. There are a lot of efforts, through developmental GTEx and within BICAN, thinking about this. We didn't touch on it so much, but I did want to highlight it at the beginning, and it's still the topic, I think, of ongoing efforts and a lot of energy. If you go to the next slide, Dirk. Thank you. But I will highlight our approach. So what we wanted to do in this session was to summarize the current approaches that are being used and use that as a starting point to think about how we can improve. And our goals for the panel were to recognize the needs, both today and in the future, for research applications. We had a lot of brainstorming about what might be coming in the pipeline, things we aren't even thinking about now but need to be prepared for. And we hoped to think about some recommendations and guidelines in terms of how we can best prepare our samples. So next slide, please. So thinking about tissue acquisition, the first concept that we touched on, the first question, was: what are our eligibility criteria? What's coming in? We wanted to think about what biological and technical parameters we should be considering: broad age range distribution, sex and gender representation, demographic representation. And then also, how do we reconcile diagnostic needs with research needs? These are brain samples that in many cases have to go through formal autopsy for diagnosis. So how do we meet that important, critical need alongside the research need to be able to study regions of interest or whole hemispheres?

And I think one thing that jumped out at this panel was that the neuropathologists are really at the forefront of this effort. A lot of importance was put not just on the role of understanding and identifying the cases but on really evaluating each of these cases: we needed to have not just acquisition of the material but a full clinical and diagnostic picture of the material we were getting in, a clinical diagnosis, the neuropathology review. And next slide, please. And this was fundamentally important because oftentimes many of us as researchers are looking for, quote, "neurotypical", the control tissue that we want to understand. But we really need to define the limits. What are the parameters that really fall within neurotypical? That was highlighted by Harry within the panel. He presented a slide that talked about the importance of determining whether a sample we are receiving is truly neurotypical. He mentioned that 56% of the cases described as, quote, "control" actually failed to meet the criteria after a full neuropathological and clinical workup,

that there are other issues, other features, and maybe microchanges that weren't detected on gross evaluation. So without that detailed analysis, those were misidentified. And 46% of the cases that we would describe as Alzheimer's at intake failed to meet criteria for AD on full neuropathological review but could have other comorbidities, for example, older patients who have both a neurodegenerative condition and a vascular condition. There were other pieces that Harry mentioned, but I think, again, this highlights that at any entry point, these tissues really have to have an evaluation so that we really understand the material we're getting and get that full metadata perspective on the samples.

Next slide, please. So ultimately, one of the first recommendations that came out was that our samples really need holistic and complete neuropathological evaluations, and that sometimes this could be challenging and would require special handling accommodations. For example, a lot of these final, more complex neuropathological evaluations take more time, so we need a process to be able to hand off tissue for research and keep track of it, such that a diagnosis, when it's ultimately made from the diagnostic tissue, can be returned and still be linked to the material by the research team, giving them that full, complete information. Next slide. This led into a concept that many people brought up: we really need to be cognizant of what material is coming in, because our data is only as good as the material that we receive. This idea of garbage in, garbage out was highlighted by multiple people. If we don't have a really strong set of criteria for how we're evaluating our tissue and getting a full metric of what our tissue is, we really won't understand the nature of the tissue, its limitations, and its caveats. And that's going to influence any interpretation of the data that we get. Next slide. Again, there's this concept of how we define what is neurotypical, or, as was mentioned earlier, what we deem as control. The concept was this balance of really wanting to be stringent and making sure that we bring in high-quality material, while recognizing that we're still so early in this process, so we need to understand the limits, understand the full variance of our tissue. So we need to collect many cases with as much metadata as possible, the neuropathology, other aspects that we'll talk about, any biological and clinical information, to be able to fully understand the variation and determine what is normative within these samples versus what is pathological. And I think those definitions will start evolving as we get more cases in and really understand the tissue.

A lot of discussion was had about how conservative we should be in our tissue acquisition. When should we say no and block a tissue from entering our bank? And there was no consensus because, again, I think we're still so early. Some even raised the concept of only throwing out tissue if it's really degraded, and otherwise bringing in and flagging the tissues that might have longer postmortem intervals or other issues but still processing them to understand their limitations. There was also the question of whether we should develop techniques to take advantage of more tissue. Are there ways that we could modify our techniques to take advantage of samples that might have a longer postmortem interval, where the RIN (RNA integrity number) score might not be as good? So we ended up with the idea that there has to be give and take, and a need to be testing a lot of our protocols, even thinking about supporting and encouraging reporting on this testing, examining the nature of the protocols that we're using and how they apply to human material. And that was, next slide, our last recommendation. So we really need to have sharing, a kind of consortium for testing protocols, seeing the limits of the protocols and also the limits of our human tissue, and disseminating this knowledge so that other people can learn from these efforts. Next slide. Another concept that came up, related to understanding the metadata and how we're bringing in the tissue, was wanting to gather genetic information from all our samples. There was a question of what the current status of genomic sequencing for cases is, and that was variable. It came out that probably only a subset of brains entering most of these banks have sequencing data. So we want to understand what people are already planning within their projects and, for those projects that are collecting genetic sequencing, what data a bank can provide to the end users so that they have that important information. Everyone thought that sequencing was very key and could also be a way to help with the diversity effort, because it could provide additional information from each case, such as ancestry information.

One thing that was also highlighted was that when we receive any sample, especially for getting genetic information, we want to make sure that all our consents are up to date. And it was highlighted that for the NeuroBioBank, consent is very, very broad, specifically covers genetics, and allows for open access to de-identified data. And that's really important. We want to make sure that all collection efforts in this consortium have that kind of language and that the consents are appropriate. And then lastly, there's the question of whether we need to go back and sequence some of the archived cases. Do we have the material for that? And how will that information fit in with the new BICAN collections that are coming in? So our recommendation ultimately was that we really want to urge that genetics be assessed for all cases that are coming in, and that, especially now, it be a priority. Now, the next question or category that we were thinking about was how we characterize and use this tissue. What techniques are currently being used, and what techniques do we see coming down the pipeline, so that we can prep and better store our tissue to allow for future experimental use? A common theme that came out was that most of our banks either fix tissue with paraformaldehyde or formalin or flash-freeze it for a lot of the omics work being done. And there was discussion about whether there are other ways to prep tissue. For example, there were some efforts in the past with perfusing tissue, because that can improve the quality of the tissue. But obviously, that would then be fixed tissue and would not allow for fresh-frozen usage later. And so we need continuing efforts to think about how we prepare for future experimentation. The other thing that came out was that all this effort is really going to require anticipatory planning and preparation. Not just knowing when a case is coming, when the tissue is coming, but also, if you have the resources, whether there are special reagents that need to be prepared.

The concept of needing to just collect, collect, collect as many brains as possible arose again. Many brains will have to go through this pipeline, especially if we're testing different ways of processing and storing so that we ultimately get usable material at the end. So this concept of needing to collect a lot of brains to get the numbers needed to test protocols is really important. And we also need to recognize the challenges of making real-time decisions in how to collect and process. So this led to one of our biggest recommendations, I think - and, Dirk, the next slide - that we need formal ways of increasing communication between the brain banks and the neuropathology teams on one side and the researchers and end users on the other, so that there can be an understanding of what is currently available and what is currently feasible, but also looking to the future, brainstorming in preparation for the work that's going to be required for future protocols and future studies. And also to think about whether there are specialized needs. Each bank might have an institutional project that is unique, and the banking team will want to support that. But the communication has to be present. And so really encouraging that interaction, I think, will help this pathway and bridge tissue acquisition to actually getting into experimental pipelines. Next slide.

ERIN GRAY: You have about two minutes left.

DIRK KEENE: Yeah, yeah. I'll go quick. I was just noticing that. So the last two topics - and I apologize, the sun is coming up in Seattle; it never happens this time of year - are the need for postmortem MRI and, more broadly, how we are characterizing these brains as they're coming into our pipeline. There are a number of efforts already happening in the existing BICAN grants. I know that a lot of the applications that are going to come in with these new imaging techniques are going to come with a lot of imaging already. And the recommendation is just to try to incorporate as much of this as possible. On the right is a figure from the Lein UM1, showing how that tissue pipeline processes through, with the goal of having some level of imaging, either with rapid MRI or with 3D scanning and volumetric reconstruction if you can't actually get a real MRI, that will help us get these brains into common coordinate frameworks and parcellate the anatomical regions to help ensure precision sampling. That's a really important concept. And generally, we want to build on what's already being done in the existing BICAN groups to the extent that we can, and then, as we innovate and bring in these new technologies, complement what's happening in the existing BICAN. So that's going to be really important. We also had discussions about other modalities to try to characterize these brains. I think the goal is: let's know as much about these brains as possible as they're going through the pipeline. And then finally, a really critical issue is how we distribute the tissues. And again, I think one of the recommendations is: let's take advantage of existing infrastructure. There is existing infrastructure from the NeuroBioBank that is serving as a model for a lot of what we're doing across BICAN. We are going to need specialized protocols for some of these applications, which may require specific, tailored approaches at certain banks. But we need a common framework for how we understand the tissues coming into BICAN and for helping researchers understand what's available. That's already being developed, and we need to emphasize the importance of tracking and sharing and making sure the resources are as widely available as possible, in addition to the associated metadata.

And so finally, this is a summary of the recommendations. You've heard most of this already, but basically, I think it's really important. The theme of this session was that there are limitations on what we can do from a neuropathological perspective, and there are limitations and requirements we need to work within to really get the most out of these brains. And it's incredibly important for the brain banks, the neuropathologists, and the end users to be working together, to try to meet somewhere where we really can advance the best science for these brain donors, who have given a huge gift to science with their brains. And so I think I'm just going to stop at that point. We'll have the slides, and this is, I'm sure, recorded. So I'll stop there and stop sharing. And I don't know if there's time for questions.

ERIN GRAY: We don't have any time for questions. But I appreciate that you were answering questions in the chat, so it looks like we did get caught up. Yeah, as Dirk mentioned, keep the conversation going after this session and after today. But also, you can keep entering questions in the chat as well. But I will go ahead and introduce our next speakers: Drs. Douglas Richardson and Elizabeth Hillman, who moderated the session on brain tissue clearing, multiplexed molecular labeling, and scalable reagents. So we can see your slides. So I'll turn it over to you. Thank you.

DOUGLAS RICHARDSON: Thank you. So we had a very large group of people yesterday in our session, and we had a great discussion. We essentially split the session into two halves. We spent the first half talking about tissue clearing and the second half talking about labeling. So Elizabeth and I-- I apologize there are no pretty pictures on any of the slides, but Elizabeth and I tried to break this into two sections for each of tissue clearing and labeling: a review of where we stand now and maybe some of the limitations, and then some hopes and maybe promising techniques and things looking forward in both of these areas. So our discussion on the tissue-clearing side was actually quite positive. The tissue-clearing techniques seem to be at quite an advanced stage at this point and have reliable processing. However, the consensus still was that the idea of clearing an entire intact brain is probably unlikely in the near future, and that human brains will need to be cut into approximately five-millimeter-thick slabs. We can do a good job of clearing those fairly homogeneously and image them from top to bottom. There's definitely a lot of great knowledge in the group, people who have already attempted human brain tissue clearing, so they could feed back some information on that. It's definitely obvious that human brain is more difficult to clear than the mouse tissue that most of these techniques have been developed on. This is primarily due to the increased density of human tissue, lots of extra autofluorescence, especially from lipofuscin, and then just the general structural variability in human tissue, which we heard about in some of the keynote talks yesterday. And on the preparation side, as we just heard from the last group, these samples can be prepared in many different ways before they get to the people who are going to do the clearing.

The other thing that we spent a lot of time talking about was: what is the actual use case for this? Just within our group, there are people who want to look at many targets and people who want to look at very few targets; people who want to look at abundant biomolecules versus sparse biomolecules, or ones that are expressed at low levels. There are people who want to look at protein and people who want to look at mRNA. So it's really clear that although we have these protocols out there for tissue clearing at this point, depending on what you want to look at at the end of the experiment, those current protocols are going to need to be tweaked and optimized, especially the ones that have only been done in rodent brains. These are going to need some extra optimization and customization to work in human brains for this wide variety of possible use cases. Even in really well-cleared samples, there's still variability in image quality across different brain regions, which is something that will have to be considered. This correlates closely with differential lipid content in different areas of the brain. And if you do try to really push for complete lipid removal, you risk the loss of scarce biomolecules. If you have something that's a low expressor, it might not be there if you really push that lipid clearing. Even in samples that are really well cleared, we still have Rayleigh scatter. This happens whenever photons hit matter, so as long as you have atoms there, you're still going to have this scatter, and it is highly dependent on the wavelength of light that you're using. So therefore, blue dyes are still pretty much off the table, and even some green dyes are questionable, depending on how much target you have there and whether you're working in a really well-cleared or poorly cleared area of the brain.
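
To make that wavelength dependence concrete: Rayleigh scattering intensity scales roughly as 1/λ⁴, which is why blue excitation fares so much worse than red even in well-cleared tissue. Here is a minimal sketch of that arithmetic; the specific laser lines chosen are illustrative assumptions, not values from the session:

```python
# Rayleigh scattering intensity scales roughly as 1/wavelength^4, so
# shorter (bluer) wavelengths scatter far more than longer (redder) ones.
# The wavelengths below are common laser lines, used here for illustration.
wavelengths_nm = {
    "blue (405 nm)": 405,
    "green (488 nm)": 488,
    "yellow-green (561 nm)": 561,
    "red (647 nm)": 647,
}

reference_nm = 647  # compare everything against red excitation
for name, wl in wavelengths_nm.items():
    relative_scatter = (reference_nm / wl) ** 4
    print(f"{name}: ~{relative_scatter:.1f}x the scatter of red light")
```

Running this gives roughly 6.5x more scatter at 405 nm than at 647 nm, consistent with the remark that blue dyes are off the table.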

On more of a kind of positive note, we think the resolution that's achievable with our current techniques is probably not going to be limited by the tissue clearing but more limited by the amount of data you want to store. So we do feel, and, I mean, we saw in presentations yesterday that you can get really high-resolution imaging with the techniques that we have now. And another positive that came out of this too is there was consensus that cost and the amount of time that it takes to clear samples are not concerns or limitations to using these techniques where they are now. And, I mean, yeah, it's not cheap, but relative to the cost of labeling or microscope access and data storage, the cost of the clearing reagents isn't that excessive. So what are some things that we can look to in the future? So we definitely need better validation of the clearing protocols. So right now the way most of these techniques are validated is by placing the sample over some typed text and seeing if you can read it underneath. So we spent a little bit of time talking about what could be done here. So we talked about measuring point spread functions from fluorescent beads inserted throughout or below the tissue, even just looking at the degree of aberration between identical structures that are at the top of the sample versus the bottom and the middle, so looking at nuclei, dendrites, things like that. And it's also important that this validation be done across multiple brain regions so that we look at those areas that have high lipid content versus areas that have a little bit less. We also need to come up with some consensus on how we're going to do these five-millimeter slabs. Are we doing coronal sections? Sagittal sections? Are we doing a very thin section along with a thick section? And we'll talk about that a little bit more on the labeling side.
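
One way to operationalize the bead-based validation idea mentioned above is to measure the point spread function's full width at half maximum (FWHM) at different depths and compare the top, middle, and bottom of a slab. Below is a minimal sketch of that measurement; the synthetic Gaussian profile and the pixel size are assumptions for illustration, standing in for a real line profile through a fluorescent bead image:

```python
import numpy as np

def fwhm_um(profile, pixel_size_um):
    """Crude full-width-at-half-maximum estimate of a 1-D bead intensity
    profile; comparing FWHM across slab depths quantifies aberration."""
    profile = profile - profile.min()  # subtract background offset
    above = np.where(profile >= profile.max() / 2.0)[0]
    return (above[-1] - above[0]) * pixel_size_um

# Synthetic bead profile: a Gaussian with sigma = 2 px (stand-in for a
# real profile extracted from an image of a sub-resolution bead).
x = np.arange(64)
bead = np.exp(-((x - 32) ** 2) / (2 * 2.0**2))

# Theoretical FWHM is 2.355 * sigma = 4.71 px; this integer-index estimate
# is coarser, so sub-pixel interpolation would be used in practice.
print(f"FWHM ~ {fwhm_um(bead, pixel_size_um=0.5):.2f} um")
```

The same function applied to beads at the top versus the bottom of a five-millimeter slab would give a simple, region-by-region number for how much the clearing degrades resolution with depth.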

And then some of the experts in the field felt that there may be possibilities for pushing beyond five millimeters in the future, primarily through improving water removal in the solvent-based methods and maybe with the use of enzymes. And one other thing that everyone felt was that we still need to improve the retention of mRNA in cleared tissue. So that was the cleared-tissue side of it. Switching over to the labeling side. Here, we felt that antibodies and nanobodies still remain the reagents of choice for labeling protein targets in human brain tissue. However, the cost of this is completely prohibitive. Some back-of-the-envelope calculations said you'd need upwards of 50 milligrams of antibody per target, per brain, which is well over $200,000 to buy through a commercial supplier. So we're definitely going to need to come up with open-source methods or publicly funded sources for that. The sequential labeling protocols, where you do sort of four labels at a time, strip that out, and re-probe, are severely limited by the amount of time it takes to do that in a really large sample and the amount of data that you're going to create. Barcoding methods are obviously an alternative that may allow us to scale closer to exponentially with the number of bits that we have in our barcode. That would be great. It would decrease our imaging time and reduce the amount of imaging data overall. However, for these barcodes to work, they require sparse labeling and high-resolution imaging. And at the moment, that's going to be really difficult to do for protein labeling. mRNA is obviously much sparser and a little more amenable to this. But still, even for mRNA it's going to be difficult to do this in thick samples. So one idea that came out of that is that it might be better to perform barcoded imaging at high plex on a very thin section - say, taken from above or below one of these five-millimeter slabs - and map that back onto a low-resolution map of the entire brain.
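
Two bits of arithmetic behind those statements, sketched below: the per-target antibody cost quoted in the session, and the exponential scaling that makes barcoding attractive. The per-milligram price and the target and brain counts are assumptions chosen only to be consistent with the "50 mg and well over $200,000 per target" figure from the talk:

```python
# Back-of-the-envelope antibody cost: ~50 mg per target per whole brain.
# $4,000/mg is an assumed commercial price consistent with ">$200,000".
mg_per_target_per_brain = 50
usd_per_mg_commercial = 4_000
targets = 20   # hypothetical panel size
brains = 10    # hypothetical cohort size

total_usd = mg_per_target_per_brain * usd_per_mg_commercial * targets * brains
print(f"Sequential labeling, {targets} targets x {brains} brains: ${total_usd:,}")

# Barcoding scales target count exponentially with imaging rounds:
# n bits can in principle distinguish up to 2**n - 1 targets.
for bits in (4, 8, 16):
    print(f"{bits}-bit barcode: up to {2**bits - 1:,} targets")
```

Even these rough numbers show why sequential antibody labeling of whole brains is cost-prohibitive at scale, while a 16-bit barcode could in principle cover tens of thousands of targets in the same number of imaging rounds.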

So some of our promising technologies and needs for future development in the labeling field were-- like I said, there's a clear need for not-for-profit antibody sources. And luckily we had some representatives of those organizations in the group, and they said they definitely need clear direction and coordination to inform them which targets need to be prioritized when they're making these reagents. Monoclonal production is definitely essential; polyclonals introduce too much variability and just cannot scale. One other thing that was talked about was the possibility of using minimally processed supernatants from monoclonal antibody production. Of course, this needs to be balanced against the possibility of some contaminants coming out of that supernatant. Another thing that was mentioned was fluidic chambers that might be able to reduce the amount of antibody that's required for staining. And, of course, if we go with this method of very thin sections, where we're doing high-resolution maps with a large number of targets and mapping those onto lower-resolution whole-brain images, we need methods to merge that data and figure out how to do that sparse sampling. So that was—

ERIN GRAY: You have about two minutes left.

DOUGLAS RICHARDSON: Yep. That was all I had.

ERIN GRAY: Great timing. So there is some discussion in the chat regarding the slab thickness and what the biobanks are doing, the different kinds of slab slices that are being generated now. It sounds like we're in a range of four to eight millimeters, it looks like. Does anyone want to comment on that?

DIRK KEENE: I'll just comment real quick. For the current BICAN grants, remember, these are almost all frozen-tissue applications. The need to have thin slices to be able to preserve the anatomy, because we're actually cutting pieces out for a lot of these assays, was noted to be really important. And so we've all gone to a five-millimeter-or-less sampling protocol, which has required some real innovative approaches to trying to slice fresh brain that we can talk about at some point. The NeuroBioBank, the University of Washington, and UC Irvine are all sort of working toward this. It'll be different, potentially, for the next set of grants, right, with the cleared brains and things like that. And so I think we just need to always be cognizant of what the uses are and how much overlap we can have in these brains so that we can use them as widely as possible.

ERIN GRAY: Great. Thank you. We may have time for just one more question. There's some talk in the chat about antibody and reducing cost. If someone would like to ask the question or start a brief conversation about that?

ELIZABETH HILLMAN: I'll just add something that I mentioned at the end to the broader group. At Yong's suggestion, an antibody task force is being put together, which was pitched at the multi-consortium pitch meeting a couple of days ago. So it really did bring to light what Doug mentioned: telling people what you are using and what's working for you, considering having the suppliers actually list the cost and the scalability of specific antibodies, just gathering a lot more information and coming up with a sort of consensus list of what should be developed would be really helpful. So if anyone's interested in joining that task force, I'm happy to connect you if you contact me, or Ignacio and Emelina Phan, who are kind of heading it up. So that's all I had to say.

ERIN GRAY: Okay. Thank you. And thank you for that nice report back as well.

ELIZABETH HILLMAN: There's a hand raised.

JONATHAN SILVERSTEIN: This is John.

ERIN GRAY: Do you have a quick question?

JONATHAN SILVERSTEIN: I just want to make a comment on the process of getting all those antibodies. The experience in some other consortia, for other types of tissue, which I think would also apply to brain, has been that antibody functionality and usability in each individual tissue, for each individual use case, seems to go beyond what the companies and the folks who develop the antibodies are willing to do. And so there's a whole process called OMAP, or O-M-A-P, that one of the NIH folks and a bunch of folks from other consortia have been working on, which provides a process for validating an antibody for a purpose in a tissue and then providing a panel that covers all of the major features you're looking for, a sort of base panel for whatever type of tissue you're using. So it sounds like this effort is a nearly identical type of effort for brain, and it would probably be very useful to learn from and align with those experiences: how to go about that, how to do those validation experiments, these kinds of things that go beyond an antibody repository to a by-tissue, by-assay antibody repository.

ERIN GRAY: Great. Thank you. Sounds like something for further discussion. I would like to move on to our next presentation, from Drs. Jason Stein and Patrick Hof, who moderated the session on brain histology and cytoarchitecture, so morphology and other anatomical phenotypes. I can see your slides, so please go ahead.

JASON STEIN: Okay. Great. So our session tried to address two questions that basically overlap with the previous sessions, but hopefully with a somewhat different take. So I want to thank our panelists for a great discussion. So just as some motivation-- sorry, sorry, the two questions that we tried to address were: what samples should we image, and what should we label within those samples? Very generalized questions. As some motivation for this, there is great variability in brain structure and cell types present across the entirety of the lifespan, as you can see visually in this great review from Nenad Sestan's lab. There is also a great deal of interindividual variability in human brain structure even within a given age. When measuring brain structure at a macro scale using MRI, cortical surface area and cortical thickness can be measured, and population variability has been assessed in hundreds of thousands of MRIs at this point. And you can see that there's a really large degree of population variability from the 2.5th to the 97.5th percentile here for cortical surface area, meaning changes in cortical surface area like going from a 16-inch pizza pie to a 21-inch pizza pie. So our first question was: what intact brain samples should we acquire-- sorry, should we image, and how do we acquire them? Given limited resources for initial atlases, should we, one, focus on specific ages and make a diverse atlas, to study population variability or differences across sexes? Another option would be to focus on multiple ages with limited donors from each, so basically making a temporal atlas instead.
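
For scale, the pizza analogy works out as follows; this is a quick illustrative computation, not a figure from the talk:

```python
import math

def pizza_area_sq_in(diameter_in):
    """Area of a circular pizza from its diameter, in square inches."""
    return math.pi * (diameter_in / 2) ** 2

small, large = pizza_area_sq_in(16), pizza_area_sq_in(21)
print(f"16-inch pizza: {small:.0f} sq in; 21-inch pizza: {large:.0f} sq in")
print(f"area ratio: {large / small:.2f}x")  # (21/16)^2, about 1.72x
```

So the 2.5th-to-97.5th-percentile spread corresponds to roughly a 1.7-fold range in cortical surface area across individuals.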

An even different possibility, instead of imaging entire brains, would be to focus on specific regions of the brain. There are some caveats to that. If we were going to do cortex, it kind of makes sense to do the full brain anyway. But if we're going to do subcortical structures, the subcortical markers may be different from the cortical markers, so a cortical antibody panel would not generalize, which kind of motivates a study focusing on specific regions of the brain. And then a final question is, if we start with an initial atlas, along what dimensions would we want to expand these atlases in the future? Okay. So the discussion on what intact brain samples we should image, and how we acquire them, focused a lot on studying population variability at an older age, because this may be the most feasible, as these brains are more straightforward to acquire currently. There's also the possibility, and this was talked about in the first talk, of acquiring a lot of clinical metadata, including previous MRI scans that may be obtainable through medical records. Of course, this study design has limitations, and one of them is that there will be changes in brain structure due to neurodegeneration that may need to be classified by neuropathologists and may not be representative of the normal development of the human brain. Another limitation is, of course, that if we're studying only older brains, we're excluding developmental stages and may miss important structural phenotypes that aren't present at these older stages. Nevertheless, I think there was a lot of interest in studying this population variability at older ages. There were also interesting suggestions brought up about future brain collection that could tag onto existing resources. So it was brought up that future brain collection may be prospective and build from existing longitudinal MRI and genetics resources.

Some suggestions for that would be prospective recruitment from projects like the Human Connectome Project or other adjacent Connectome projects, or things like the UK Biobank, which have acquired large-scale populations, some in twins, some in general populations, where there is a lot of other information already acquired, including genetics. Of course, this type of future brain collection would not be possible within the confines of the next five years for this particular BICAN grant, but it may be feasible for long-term planning. And a conclusion was that we need an estimate of the brains existing in brain banks that could be used for these projects, so we know how much prospective recruitment is required to complete all of the work proposed as part of BICAN. Okay. The next question is what to label in the brain, and there are many options, many different features that can be labeled. Some of the simplest are labeling nuclei and nuclear features. Labeling whole cells is also possible, although complicated, with a lot of overlapping data. And then, of course, labeling synapses as well. So many different features, not just these here. So the question posed to the panel was: what should we label - cell types, axons, synapses? And even within those, should we label specific cell types, going down the hierarchy of cell types, or look at cardinal cell types or classes? And should we work on full morphology or, more simply, nuclear labels, which may be easier to separate? And another question posed to the group was: should we complete a few brains with many labels and then, subsequently, many brains with few labels, for cost reasons? So these were the two questions posed to the group, and here's what some of that discussion was.

So although some suggested that we should label everything, all cell types and all features, I think the general consensus of the group was that we have to prioritize the first features to image. A great suggestion was that we start simple and build up complexity by using general sub-class nuclear markers that already have well-validated antibodies, so starting with this more simplistic approach and then building from there. There was a nice suggestion that using vasculature as fiducials to align across modalities would be very useful, and that right now little is known about the architecture of cerebral microvasculature, so this would actually be interesting both technically and for new biological understanding. And then, subsequently, future refinements of cell types would be accomplished by targeting and testing new antibodies based on differential gene expression predicted from single-cell RNA-seq or spatial transcriptomics. So we start with well-validated antibodies, then move on to new antibodies in the future. It would also be useful to have multiple projects working on different aspects of features: some groups working on axons, some groups working on cell types, some groups working on synapses. Simultaneously, in different brains, of course. There was also some suggestion that we should limit labeling to reversible and non-destructive methods when possible. Though, of course, there are a lot of constraints to this, because tissue processing, for example freezing versus fixing, can restrict what downstream work you can do. And also, these reversible methods may damage the tissue, so they may actually not allow restaining. So the idea here would be that if we acquire additional brain samples, they could be set aside for more labeling and for testing these new, different labeling approaches.

And so we also talked about how nuclear markers are the easiest to label and quantify, but some morphology is still predictive of cell type and can be a useful feature for predicting cell types. So one thought here was that electron microscopy in small volumes could be used to identify morphological features that could then be labeled, using fluorescence, in whole brains. So EM in small volumes predicts features for whole brains. Okay. Finally, on the question of whether we should do a few brains with many labels and, subsequently, many brains with few labels, there was general consensus that this was a valuable approach, that this would be a good idea. So training prediction tools to predict cell types without acquiring many channels in every brain would be a very valuable approach. Using a combination of multiple channels and modalities, like spatial transcriptomics, autofluorescence, or nuclear morphology, the idea would be to train algorithms to predict cell types from these, without a specific marker for that cell type being imaged in every brain. Another idea was that training generative models to generate different modalities from one modality would be very valuable, if it's possible to label multiple modalities in the same section. So, for example, translating fluorescence microscopy to H&E would add value for pathologists. We had a nice small presentation about the FIBI method, which could be one way of doing that. Translating fluorescence to Nissl would add value for anatomists, and translating fluorescence to MRI would add value for both neurologists and anatomists. Yeah. So this was the general discussion, and I thought it was really valuable. I'll stop sharing here.
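
As a concrete illustration of the "few brains with many labels, many brains with few labels" idea, here is a minimal sketch of training a classifier on densely labeled reference brains and applying it to brains imaged with only cheap channels. Everything here - the feature set, the random stand-in data, the model choice - is a hypothetical example, not anything proposed in the session:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in per-cell features from the cheap channels: nuclear area,
# nuclear eccentricity, autofluorescence intensity. In practice these
# would come from segmenting the densely labeled reference brains.
X_reference = rng.normal(size=(1000, 3))
y_reference = rng.integers(0, 4, size=1000)  # cell-type labels from the
                                             # many-label reference brains

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_reference, y_reference)

# A new brain imaged with only the cheap channels: predict cell types
# without ever imaging the type-specific markers in that brain.
X_new_brain = rng.normal(size=(5, 3))
print(clf.predict(X_new_brain))
```

On real data the labels would come from the marker channels of the densely imaged brains, and the classifier's cross-validated accuracy would determine how few channels the "many brains" arm could get away with.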

ERIN GRAY: Thank you for that presentation. It's very nice. Looking in the chat, I noticed it's mentioned that prospective brain collections take years to yield a meaningful cohort of brains for study, unless the study is restricted to older age groups, so greater than 65 years of age. Any comments on that?

PATRICK HOF: I think this is a very good point, and it is absolutely clear that it's going to take time to accrue a sample of reasonable size that will satisfy the needs of all of the projects that will be doing this sort of work. Nonetheless, it is absolutely essential that it be implemented as soon as possible. Definitely, the older population, we know, will provide the largest number of brains. And this goes back to the issues of neuropathological characterization of the specimens, which we heard about earlier this morning. It needs to be done also for the younger ages. And in fact, I did receive this morning some additional comments from our discussion yesterday regarding including young adult brains. And that should be considered, because this came from someone who works across multiple species and made the point that there is a very large amount of data that exists from young adult mice and that the same could easily be achieved in non-human primates. So this is something to consider. It is evidently a lot more difficult to achieve, but efforts to get younger brains should absolutely be implemented, ideally, for a global project of this sort.

ERIN GRAY: Okay. Thank you. We are at time.