

2022 High Throughput Imaging Characterization of Brain Cell Types & Connectivity-Day 1, Part 2

Transcript

NED TALLEY: So for the lightning talks, everybody has got to stay on time. I'm going to be jumping in when your time is up. And if you don't stop, I'm going to be talking to Yong Yao about your prospects for future funding. So keep that in mind. The first presentation is from Allan Johnson from the Duke Center for In Vivo Microscopy. Dr. Johnson, are you on?

ALLAN JOHNSON: Thank you. Can you hear me?

NED TALLEY: Yes, we can hear you. Please proceed.

ALLAN JOHNSON: Well, thank you for the opportunity to speak today. I'll go quickly. We are going to talk about magnetic resonance microscopy and merging it with light-sheet microscopy. Go ahead. Next slide. So the key to this is that we-- I guess some of my text has been whited out. But what you are looking at here is an MR image at 15 microns. This is about a million and a half times the spatial resolution of the human connectome, and we have added to that light-sheet images at 1.8 microns. You will see on the right the NeuN stain from-- we're going ahead too fast here. On the right, in the NeuN stain in the hippocampus, we're seeing the dentate gyrus, where we're resolving about a layer of cells. We're seeing that also in the diffusion. Go ahead. Next. And we can do that across multiple stains.

What you are looking at on the left-hand side is light sheet from LifeCanvas Technologies of the same specimen before we get it-- before we register it. So the red is the light sheet before registration. The green is after registration. You can see that there is considerable distortion. That is common for light sheet, I'm told, and we have developed methods by which we map the light sheet back into the MR, where it is more faithfully reproduced because the MR is acquired with the tissue in the skull. You can see on the right that we have myelin basic protein matched with the radial diffusivity. The diffusion tensor images were acquired at 15 microns but then extended with super-resolution down to 5 microns. Go ahead, please.
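
For readers who want to experiment with this kind of deformable mapping of a distorted light-sheet volume into an MR reference, a minimal sketch using the open-source ANTsPy package is below. This is not the Duke pipeline; the file names and the choice of the SyN transform are illustrative assumptions.

```python
import ants  # ANTsPy: pip install antspyx

# Minimal sketch of mapping a distorted light-sheet volume back into an MR
# volume with a deformable (SyN) registration. Not the speakers' actual
# pipeline; file names and parameters are illustrative placeholders.
mr = ants.image_read("mri_15um.nii.gz")            # MR acquired with the brain in the skull (fixed reference)
lightsheet = ants.image_read("lightsheet.nii.gz")  # distorted light-sheet volume (moving image)

reg = ants.registration(fixed=mr, moving=lightsheet, type_of_transform="SyN")
warped = reg["warpedmovout"]                       # light-sheet data resampled into MR space
ants.image_write(warped, "lightsheet_in_mr.nii.gz")
```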

And you'll see the diffusion tensor image in two mouse models. On the left is a standard C57. On the right is the BTBR, a model for autism. You can see considerably disrupted tractography, particularly projections in the cortex. You'll see at the top that the motor cortex is almost missing from the BTBR. Go ahead, please. Next. And this is from the atlas that we are just in the throes of publishing. It is the combination of all of these 15-micron and light-sheet data. This is again a light-sheet specimen from LifeCanvas Technologies. The specimen, the brain in question, expressed yellow fluorescent protein under Thy-1, and you're seeing superimposed upon that tractography at five-micron spatial resolution. And if we move into the hippocampus, you can see that the tractography is even picking up some of the projections of apical dendrites. Go ahead, please.

And we have done microscopy in human brain as well. You'll see on the right the human brainstem, which was scanned at 50 microns for the gradient echo and 200 microns for the diffusion tensor. And we resolve the dentato-rubro-thalamic tract, which the surgeons are using here at Duke for mapping the placement of electrodes. Since the brainstem is relatively conserved, we can match the brainstem from our atlas into the patient's study. And the surgeons are finding it to be a very helpful map to help them place the electrodes. Next, please. And finally, I will end with a little bit of eye candy. This is diffusion tractography of the whole mouse brain at five microns. I hope that I've finished in the time frame and that I'm now guaranteed funding for the rest of my career. Is that right?

NED TALLEY: Well, it depends on how long your career is intended to be. But yes. You did end very much on time, and we appreciate it because we started a little bit late. Thank you so much, Dr. Johnson. I think up next is Dr. Berger. And yep. Daniel Berger. And are you on the video and audio?

DANIEL BERGER: I'm here. Can you hear me?

NED TALLEY: Yes, we can hear you. Please proceed.

DANIEL BERGER: All right. So I would like to introduce you to a study that we did recently, which is a collaboration between our lab and Google. And it's interesting for two reasons. Next slide, please. So this workshop discusses acquiring petabytes of imaging data of human brain and making the data accessible online for collaborative research. And our labs at Harvard and Google are already working with a very large -- more than a petabyte -- microscopy data set of human cortex, which is hosted and browsable online. However, this is 3D electron microscopy at a resolution of 4 x 4 x 33 nanometers, 1 cubic millimeter in volume. So in resolution we're going up by approximately a factor of a million, and in size down by a factor of a million, so it's a very small region of human cortex. But in terms of data size, it's similar.
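
As a rough check of those factors, the arithmetic can be done in a few lines. The whole-brain comparison point (a roughly 1.3-liter brain imaged at about 1-micron voxels) is an assumption for illustration, not a figure from the talk.

```python
# Back-of-envelope check of the scale comparison above. The whole-brain
# reference values (~1.3e6 mm^3 brain volume, ~1 um^3 light-microscopy voxels)
# are illustrative assumptions.
em_voxel_nm3 = 4 * 4 * 33                  # EM voxel volume in nm^3
volume_nm3 = 1e18                          # 1 mm^3 expressed in nm^3
em_voxels = volume_nm3 / em_voxel_nm3      # ~1.9e15 voxels -> petabyte scale at ~1 byte/voxel
resolution_factor = 1e9 / em_voxel_nm3     # 1 um^3 light-microscopy voxel vs EM voxel: ~2e6x finer
volume_factor = 1.3e6 / 1.0                # whole brain (~1.3e6 mm^3) vs this 1 mm^3 sample
print(f"{em_voxels:.1e} voxels; ~{resolution_factor:.0e}x finer voxel volume; "
      f"~{volume_factor:.0e}x smaller sample volume")
```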

Neurites and synapses in the data have been segmented automatically. Cells can be reconstructed as 3D models and classified morphologically, and synaptic connections between them can be identified. And one reason why I want to show this is because this is an example of a very large data set of human brain that is hosted online. And secondly, mining this data set could be a valuable addition to the project for relating cell morphology to synaptic connectivity. So we cannot really include genetic or recording data in EM data sets, at least not easily. But we can see the morphology of whole cells at this scale, or close to whole cells, and we can also see the synaptic connectivity. And I would like to show a few examples of this. Next slide, please.

So just to tell you briefly, this is a small region of cortex from a 45-year-old woman who was undergoing an operation for epilepsy, but we think it's healthy cortex. Next slide, please. This shows you the data set. So it's about a cubic millimeter in volume, but it's not a millimeter by a millimeter by a millimeter. It's thinner and wider so that we can cover all layers of cortex. And this whole data set was processed by Google and segmented. Next slide, please. So we have an automatic segmentation of all the neurites in the data set. There are some errors in the automatic segmentation, so it requires proofreading, but we do have automatic predictions of synapses as well. And I went in and manually identified all the cell bodies, so we can really see all cell bodies in the data set at this resolution, then classify them based on morphology. So here you can see all the neurons and glia on the upper left, then the spiny neurons after classification, then the smooth neurons, which I think are interneurons. And we can also trace the myelin, so we can get an electron microscopy-based true image of the directions of myelinated axons. Next slide, please.

So if you do a complete census based on this classification, what we found is that about two-thirds of the cell bodies are glia. And what surprised us was that most of these are oligodendrocytes. So there are lots of tiny oligodendrocytes in the human brain. And it's a different composition of neurons versus glia than in mouse brain. Next slide, please. Here's an example of the scale and resolution we get with this technique. So you can see one pyramidal cell that was proofread, so we do have all the pieces stitched together that this neuron has within the volume. And you can see in the small inset that there is high enough resolution to actually see synapses and spines and the connections they make. So we can now trace out the synaptic connectivity of cells if we can identify them by morphology. Next slide, please.

Here's one such example, for chandelier cells. It's relatively easy to identify them in our data set because of their target specificity for axon initial segments. And we can then look at, for example in this case, a single axon initial segment and find all the local chandelier cells that make connections onto it. So you can see by arrows where these cell bodies are. There are seven local chandelier cells that have axons that synapse onto the axon initial segment of this one white pyramidal cell. And then there are five further chandelier axon branches that come in from outside, because our volume is not that large here. And this same axon initial segment receives six other inputs, which I think are basket cell axons, that don't make cartridges. So that's another way to distinguish these two cell types. So we can see connectivity at this level, synaptic connectivity, and be sure about what is connected to what. Next slide, please.

So how do we host this and serve it out? There is a tool called Neuroglancer, which is being developed by Google and has been adopted by a number of labs, and we are hosting our large data set, which is over a petabyte in size, on Google Cloud infrastructure. And people can just use their browser to view it -- to select certain objects and to view the electron microscopy, the segmentation in different versions, the skeletonization of the processes, which we also have, and other metadata using this interface. There's also a Python API that you can use to do more automated analysis. And currently, we're working on implementing a collaborative proofreading environment that will also be applicable to this data set. It's already being used for the FlyWire project, if you are interested. Yeah. So this is public; if you search for the title, you'll find it. Thank you.
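
For readers who want to pull cutouts programmatically rather than browse in Neuroglancer, the open-source cloud-volume package can read Neuroglancer-style precomputed volumes; the bucket path and coordinates below are placeholders, not the dataset's actual address.

```python
from cloudvolume import CloudVolume  # pip install cloud-volume

# Minimal sketch of reading a small cutout from a cloud-hosted, Neuroglancer-style
# precomputed volume. The path, mip level, and coordinates are placeholders;
# consult the dataset's release page for the real locations and resolutions.
vol = CloudVolume("gs://example-bucket/em/image", mip=0, use_https=True, progress=False)
cutout = vol[20000:20512, 30000:30512, 1000:1032]  # numpy-like array, shape (512, 512, 32, 1)
print(cutout.shape, cutout.dtype)
```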

NED TALLEY: Okay. Thanks so much, Dr. Berger. There are questions in the chat for you as well. It's terrific work and very exciting again. Next up is David Kleinfeld. David, are you on?

DAVID KLEINFELD: I believe so.

NED TALLEY: Okay. Please, take it away.

DAVID KLEINFELD: Okay. So I wanted to-- let me thank you and Laura for this opportunity. And I wanted to bring up two issues that may be relevant to the classification of atlases, and the building of atlases, going forward. So the first issue, and I think this was actually brought up in the chat by Rob Williams and Alison Barth, is how do you deal with brains that are different from each other? Which is to say, what's the best way to do brain-to-brain alignment? And the goal would be to have individual brains aligned with each other as well as possible, so that data sets -- be they projections, cell types by genetics or antibodies, or tract tracing, whatever -- have the best overlap, so that you can combine individual data sets. I think this is a generic problem. Okay. Next slide, please.

Okay. So let me sort of lay out the problem. I mean, if you look at the cortical layers, of course, it's very clear to your eye what the organization is. But if you move further away from cortex towards the midbrain and particularly the brainstem, which is where my colleagues and I have mostly been working, what you find is that you're in the middle of, quote, the reticular formation, which is sort of like the joke about New England: you can't get there from here. The point is that the borders between functionally defined regions are very poorly defined. So defining atlases based on sort of averaging in grayscale -- which, truth be told, is mostly what's going on with the Allen Common Coordinate Framework -- works great in the cortex but becomes somewhat diffuse by the time we get to the brainstem. You find that if you look at the fine detail of cells -- their shape, their orientation, their size, what classical anatomists like Braitenberg called texture -- you can actually distinguish regions very nicely, but if you look at more of a gray-level average, this becomes problematic, at least as one moves away from the forebrain towards the brainstem.

So what we're proposing is actually to use this detail to one's advantage: use the high-frequency information to get the best estimate of borders, and then average based on that information. So next slide, please. So this was a project, and a preliminary version was already published, so I'll just sort of sketch through it quickly. It has two major ingredients. One is you need to pull together experts. In our case, Harvey Karten, the great anatomist, was our expert. Lauren McElvain, who is now at USC, also provided tremendous input. And this allowed us to identify individual structures across many brains and form an atlas -- really a probabilistic atlas of structures. So there were consensus regions, and then there were regions that varied between annotators. And from that, we formed this consensus atlas. In this case, again, we're focused on the brainstem. Next slide, please.

Okay. How you implement the automation is an issue whose details aren't terribly important. But the point is, based on our reference atlas from human annotators, we can go ahead and pose sort of a consensus region on sections, then automatically look at regions outside the consensus sections, which is the strip between the green and the red, and look for consensus regions inside, which is inside these greens, and do some amount of mathematics. This was based on a concept from my colleague, Yoav Freund. And then from this, next slide, please, what we can do is -- we have many regions, okay -- we can make a decision as to what region of the brain this represents. The false positives are very high, but we have many regions. So therefore, we can get a fairly precise fit of this reference atlas onto the brain in a way that respects the brain-to-brain variability. And we would claim you can get down to what seems to be a standard deviation of roughly 90 to 100 microns, based on biological variability. And here is just one simple example. This was worked on by Lauren McElvain. We used tracer transport from muscles -- the vibrissa muscles, the intrinsic whisker muscles, as well as the muscles that move the jaw, whose motoneurons are in motor five. And then we could actually look at the overlap of this labeling as an example of how you can put data sets together.
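
A minimal sketch of the probabilistic-atlas idea described above (several annotators' masks averaged into a voxelwise probability, with a high-agreement consensus core) is given below. It is not the authors' algorithm, and the agreement threshold is an illustrative assumption.

```python
import numpy as np

# Minimal sketch of building a probabilistic structure map from several
# annotators and extracting a consensus core. Not the authors' method; the
# agreement threshold is an illustrative assumption.
def consensus_map(masks, threshold=0.8):
    """masks: list of same-shape boolean arrays, one per annotator."""
    prob = np.mean(np.stack(masks).astype(float), axis=0)  # fraction of annotators per voxel
    core = prob >= threshold                                # high-agreement consensus region
    return prob, core

# Toy example with three 2D annotations of the same structure.
a = np.zeros((50, 50), bool); a[10:30, 10:30] = True
b = np.zeros((50, 50), bool); b[12:32, 11:31] = True
c = np.zeros((50, 50), bool); c[9:29, 12:32] = True
prob, core = consensus_map([a, b, c], threshold=0.66)
print(core.sum(), "voxels in the consensus core")
```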

So the big lesson here gets back to this point of what is the best way to combine individual brains in a common atlas. And of course, once one does this, one can start to feed back and improve the reference atlas. Okay. Next slide. Okay. So another point I wanted to bring up: there was a very practical consideration that came up in our work, and it seems to come up with human brain, where you have very expensive chemicals and you want to minimize, absolutely minimize, the staining volume. We can particularly see this with antibodies in the human brain. In our case, it was actually just using fluorescent NeuroTrace to get cytological staining. So next slide, please. So the issue is that any atlas building requires counterstaining. You really have to see the underlying cytology to get boundaries. Many of these molecules are very expensive. So we want to have a compact and preferably automated staining system that allows us to recover dye. Next slide, please. Okay.

So Alex Groisman in my department, who is a phenomenal instrumentalist, built the essence of this. I realize this looks like a 1950s washing machine, but it's meant in a very serious way. This is a very dense slide container in which we can have a high flow rate of dye, which is recoverable. And we found that this uses about an order of magnitude less solution than commercially available staining systems. It is maybe 60% less efficient than if you just sit there with a dropper system, but that takes about 10 times more time if you hire undergraduates to do it. It allows recovery of the stain. And when we applied this using NeuroTrace in the mouse brain-- yeah.

NED TALLEY: We might need to skip the details on this. Your time is running past. So do you want to wrap it up in just a couple of seconds?

DAVID KLEINFELD: Yeah. Okay. So you could get good recovery of dye and automation with uniform staining. Next slide. And thank you.

NED TALLEY: All right. Thank you. And I'm sorry to cut you off. That was a terrific talk, really interesting work. Next up, we have Dawen Cai from University of Michigan. Dawen, are you on?

DAWEN CAI: Yes. Can you hear me?

NED TALLEY: Yes. We can hear you. Please go ahead.

DAWEN CAI: Yeah. Thank you very much. So I think today I'm the first to talk about tissue clearing and also a potential method for staining. And I would say, I'll go to this first slide if we can move on. Next slide. Sorry. All right. Yeah, sorry, there's a lot of animation. Yeah. Just go through all of it. Okay. So basically, here I'm trying to introduce a hydrogel-based workflow that we developed in collaboration with Ed Boyden at MIT. In general, we take PFA-fixed samples, be it a thick section or a whole brain, and we functionalize the proteins for our immunostaining purposes. The proteins are then crosslinked to this hydrogel matrix. And then after homogenization and the expansion of your choice -- whether you want to expand it or not -- you can clear the brain. And this can go through multiple rounds of antibody staining, imaging, and then stripping. And then you can use your favorite multiplex staining methodology here for detection. Next slide, please.

So here, basically, we list the modifications that we make based on the original protein-retention expansion microscopy recipes. The emphasis here is to try to make a stable gel structure, with reasonable expansion capability of roughly three- to four-fold. And then, in the original paper, published in 2020, our homogenization or clearing method used SDS at a high concentration and at a pretty high temperature, 70 degrees. And then the expansion is done by washing off the SDS after the clearing -- which basically strips a lot of the lipids from the tissue -- and then serial exchanges in PBS, diluted all the way out to water, to clear out the SDS.

Next animation, please. So the challenge is that at that time, we did this in a passive incubation fashion. Obviously, there are other methods, such as electrophoresis-aided methods, to speed up the SDS clearing. However, SDS is still relatively slow and very difficult to remove. I think more importantly, when we try to scale this up to human samples or whole brains or other very large animals, whether that is still a sustainable method is not very clear. Next, please. So because of that reason--

NED TALLEY: Two minutes. Okay?

DAWEN CAI: Yeah. Thank you.

NED TALLEY: Great.

DAWEN CAI: Okay. I'll speed up very quickly. So because of that reason, we tried an alternative strategy: to optimize some of the enzymes that we use for the homogenization process. And here is shown a cocktail of enzymes that we developed. We call it EPIC-clear. And basically, you can see that with these thicker sections, in an hour or even less, depending on which cocktail we use, you can clear them until they are very, very transparent. Next, please. And this also preserves GFP or YFP, for instance, very well, and we can stain neurites, processes, blood vessels, and nuclei, shown here. The next slide, please.

This can be done in the whole mouse brain, and we compare that also to passive SDS clearing. You can see that after three days of passive incubation, that mouse brain is almost completely homogenized. And next, please. And then in this same brain, we can also do antibody staining. Here we stain for GFP in this DAT-Cre line -- staining the dopaminergic neurons that express GFP -- and detect with Cy3. And this whole staining process, again by passive incubation, only takes two to five days, depending on the staining level you want to reach. If we zoom in on the closeup, you can see the fibers and cell bodies are very nicely stained in the very middle of the brain. Now next, please. The same method can be applied to clear larger volumes or more challenging tissues than the brain. Here are PFA-fixed kidneys, liver, skin, and also –

NED TALLEY: You have 30 seconds.

DAWEN CAI: Yeah, and then we'll be done. So this is the last slide I want to show, and then-- we are in the process of optimizing the protocol for human tissues, especially formalin-fixed tissue, over-fixed tissues, for which we have some preliminary results, but in the interest of time I'm not able to show them today. Thank you.

NED TALLEY: All right, thanks Dawen. Next up I believe we have Ed Boyden. Dr. Boyden are you on?

ED BOYDEN: I am, let me start my stopwatch here. Yeah, so we've already heard, thanks to Dawen, a little bit about the expansion method, so just as a tiny bit of review, then I'll skip to the new stuff. We take a piece of brain tissue, we process it by applying anchoring molecules to retain biomolecules, and we form a sodium polyacrylate, highly charged, swellable hydrogel throughout the specimen, very densely and evenly. We soften the specimen, and then, adding water, we swell the tissue to bring nanoscale features into the realm of imaging with ordinary diffraction-limited microscopes. Next slide, please. So one thing, of course, that we're very excited about is using this to facilitate mapping neuromorphology. This is an example from an older paper that we published, and Dawen has shown some very interesting improvements, as he described earlier. But by expressing combinations of proteins such as Brainbow and other examples stochastically in cells -- so you protein-barcode them, so to speak -- that plus expansion is very helpful for looking at things like the fine structures of axons, spines, dendrites, and so forth. Next slide, please.

Okay, so let's get into some newer stuff. One thing that we just published a couple of months ago -- this is quite a dense slide, so I apologize; it probably doesn't translate so well to Zoom. But one thing that we're more and more appreciating is that when you expand proteins away from each other, we make more room for staining. So in the top row, panels B, C, and D, they're sort of crowded. The yellow staining -- and there's not much yellow staining to be seen -- is pre-expansion staining against a calcium channel, RIM1/2, and PSD95, respectively. The purple is post-expansion staining. The bottom rows show examples, I guess I can draw on the slide, where the yellow pre-expansion staining and the pink post-expansion staining are more similar. And so that's an example where these molecules were not so crowded. So by exposing proteins for staining, we can make proteins more visible. Next slide, please.

Okay, so again showing some more recent things, and you've heard a bit about multiplexing already, but I forgot to mention that in the previous slide we're expanding by about 20-fold. So this is a 20-fold linear extent, so a lens with 300-nanometer-ish resolution might now have a resolution of approximately 300 nanometers divided by about 20 -- let's say 15 to 25 nanometers -- so quite good. But one of the interesting things about this, of course, is that by expanding, we could facilitate different kinds of multiplexed antibody staining. So we have a collaboration with Peng Yin's group on applying DNA-barcoded antibodies, where we can read them out using their DNA readout techniques like Immuno-SABER. But also, we're working on ways to do many rounds of staining and imaging.
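
In other words, the effective resolution is roughly the diffraction-limited optical resolution divided by the linear expansion factor, using the ~20-fold figure quoted above:

$$ r_{\text{eff}} \approx \frac{r_{\text{optical}}}{\text{expansion factor}} \approx \frac{300\ \text{nm}}{20} \approx 15\ \text{nm}. $$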

And so these are just examples here where we're trying to do over a dozen different antibody stains by expanding proteins away from each other and then washing in fluorescent antibodies, taking a picture, and washing them out. And this can be done many times. So on the left, we have sort of the zoomed-out images of pieces of mouse cortex, and then we're zooming in from left to right until we get down to effectively individual synapses. And then we can look at many, many proteins at once and how they're organized with respect to each other within a synapse. And hopefully, this could help with classification of synapse types in the context of morphology. It's not shown here, but of course one of the things we are doing is trying to combine this with the expression of the cell-filling proteins as well. Next slide, please.

Okay, so I don't know if this is going to show as well over Zoom, but if I zoom in, no pun intended, I guess I can start to see it. One thing we're also starting to do is to develop lipid stains that can be compatible with expansion. And so by applying lipid stains that will intercalate into the membrane, and forming a hydrogel to stabilize those lipid stains, we can then apply antibodies. So on the left we're actually using an anti-GFP antibody to stain YFP inside a Thy1-YFP mouse cortical neuron. On the right-hand side, it's sort of hard to see over Zoom, but we're using anti-SV2A, shown there painted on, to begin to try to label synapses in the context of lipid stains. But our hope is that this could also help fill in some of the fine details of the architecture of axons and dendrites. And this could be combined, of course, with the cell-filling proteins as well as the antibodies against specific synaptic and other proteins. Next slide, please.

We have a big culture of trying to teach people how to use these tools. I think over 400 to 450 experimental papers and preprints doing some kind of expansion have come out. Pre-COVID, we hosted a lot of people to learn how to do this with hands-on work. In the middle of 2020, we posted a photographic tutorial on how to do expansion. We're very excited about working with many people to help deploy expansion and optimize it for different purposes. And I think I'm done.

NED TALLEY: Awesome. Thank you so much Dr. Boyden. I believe next up we have Elizabeth Hillman.

ELIZABETH HILLMAN: I can't seem to turn on my video. I've been blocked apparently by the host, so.

NED TALLEY: Well, I think we're going to put you on and put you in the spotlight when you do, so please go ahead--

ELIZABETH HILLMAN: All right. Let's go. Okay, so I'm speaking on behalf of my team listed here. As you can tell, we are all New Yorkers, and so if we go to the next slide please. So just keep clicking. So we proposed this rather crazy idea. We've had this funding from the BICAN now for a little bit over a year, and our plan as a group was to image every single cell in the whole human brain using a combination of first MRI, then tissue-clearing, immunostaining, and high-speed light-sheet microscopy. And I will say that we chose to do the whole human brain because we actually find this individual variation to be extraordinarily interesting, and I was really inspired by Katrin's talk earlier on that pointed to that. But we've basically spent now the better part of three years thinking about all aspects of this pipeline, and to do something on this scale requires a great deal of compromising and trade-offs, and so I think the workshop today is going to be extremely helpful to get feedback from everybody on the way that we're approaching this and the decisions that we've had to make. Next slide please.

So in terms of our tissue clearing and labeling, we are using this HuB.Clear method, which is based on the iDISCO technique. Just keep clicking, please, Laura. And so these are brains that would be obtained by rapid autopsy, which allows us to get consent. Optimal perfusion turns out to be really helpful, and we're getting MRI of all of the brains. This is a very parallelizable, scalable, and immunohistochemistry-compatible technique, so we would have to do this at the scale of these whole human brains, and possibly tens of, or even 100, whole human brains. We want to do five-millimeter-thick slabs. This allows us to get past a lot of issues with distortion and stitching and trying to sort of piece pieces together, and we also feel very strongly that we want to avoid having to strip antibodies at all costs. So we've come up with a novel antibody selection and coding approach that we hope is going to allow us to achieve all of this by imaging nine spectral channels simultaneously. This also means that our brain slabs will be kind of archivable, and so all the imaging can be linked back to physical samples that can be examined again at a later date. Next slide, please.

Our imaging is, of course, based on light-sheet microscopy, which I think there is consensus is the best way to capture 3D images at very high throughput, but the geometries of conventional two-objective systems really limit the scale at which you can image. Again, keep clicking, please. And so the HOLiS technique is simply a single-objective light-sheet approach which allows you to image down through the full thickness of that five-millimeter slab. And essentially we can collect these big contiguous sections where we just basically move the slab around. Again, if you just click it might animate. Maybe not. Yeah. Okay. So this really saves us a great deal of energy in terms of not needing to stitch images together. It's incredibly efficient in terms of integration time and signal to noise, and -- just click. Sorry. Here's a picture of the system. Here are some examples of our hyperspectral data. Yeah, we'll be using this very high-precision stage. Again, we're thinking downstream to really minimize any need to stitch and register, which is going to be computationally very, very expensive, while also optimizing our ability to do hyperspectral imaging with maximum speed. Okay. Click. Next slide, please.

And so in terms of analysis, we've done these calculations about the size of this data, and if you look, this is just an example. In the best-case scenario here, we think we could even get down to approximately 7.8 days for imaging, which would be incredible. It would really commoditize this approach. But as you can also see, we're dealing with something on the order of three petabytes or more of data per brain. So we have a plan to do a two-tier analysis approach. The first tier is going to generate very accessible data that will be based on finding the nuclei and then extracting the proteomic signature for each nucleus, which allows us to get into this coding scheme almost similar to the way that we think of and look at and analyze transcriptomic data. So that data would be a point cloud. It would allow us to share data sets on the order of, say, 100 gigabytes instead of 3 petabytes and would allow instant quantitative analysis. So this is a very important part of our pipeline.
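
As a rough guide to where such numbers come from, a generic estimator is sketched below. The parameter values are illustrative assumptions and do not reproduce the team's exact figures, which depend on their actual sampling, channel count, cell counts, and compression choices.

```python
# Generic back-of-envelope estimators for whole-brain light-sheet data sizes.
# All parameter values are illustrative assumptions, not the team's settings;
# the printed numbers show the order of magnitude only.

def raw_size_bytes(volume_cm3, voxel_um3, channels, bytes_per_voxel):
    voxels = volume_cm3 * 1e12 / voxel_um3      # 1 cm^3 = 1e12 um^3
    return voxels * channels * bytes_per_voxel

def point_cloud_bytes(n_cells, values_per_cell, bytes_per_value):
    # e.g. per nucleus: x, y, z plus one intensity per spectral channel
    return n_cells * values_per_cell * bytes_per_value

# Hypothetical example: 1200 cm^3 brain, 1 x 1 x 2.5 um voxels, 9 channels, 16-bit.
print(f"raw: ~{raw_size_bytes(1200, 2.5, 9, 2) / 1e15:.0f} PB")
```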

Then there would obviously be access to the raw data, which would allow much more specialized tier-two analysis, including one doorbuster brain. And this would then allow secondary cell morphology analysis, visualization, manual annotation, and all of the things that we dream of. But we think that that gateway data is really, really essential to have as a very rapid output from this pipeline, so that it can be analyzed while the data is being generated.

Okay. Final slide. So yeah, as I said, we actually managed to get together in person, in real life, which was much more exciting than one might think. But I do think that it's so essential that all pieces of this pipeline communicate with each other -- going really from end to end, considering the widest possible impact of the data that we're going to generate. So: engaging our stakeholders early; ensuring really high-quality brains; managing all aspects of speed, cost, and the engineering of how we can actually do this; managing data sizes; simplifying our analysis pipelines; and then also making sure that as we develop these technologies we optimize them and hopefully allow this to be scaled up even more in the future. Thank you.

NED TALLEY: You went one second over.

ELIZABETH HILLMAN: Oh, I'm sorry. Does that mean I never get funded again?

NED TALLEY: [laughter] Great work. Okay. The next up is listed as Jayaram Chandrashekar. But I believe Adam Glaser is going to do the presentation. Is that correct?

ADAM GLASER: Yes. That's correct. Yeah.

NED TALLEY: All right. Go ahead.

ADAM GLASER: Okay. So I'm excited to talk to you today about a new light-sheet system that we've been developing at the Allen Institute for Neural Dynamics. And so to start with, I just wanted to bring up: what are the goals of any high-throughput imaging platform? Laura, you can animate through these ones. Yes. So in an imaging platform, I think what we're after is, for example, the ability to image tissue with micron-scale resolution and high isotropy, in that the Z resolution should be as close to X and Y as possible; minimal sectioning and tiling in terms of putting all the data back together; imaging large volumes; and doing it very quickly. And so when we think about the optics that we can use to build a system -- for example, a light-sheet microscope -- the life sciences have various objective lenses. And for example, if you wanted a lateral resolution of, say, one to two microns, it's sort of fixed in terms of the working distance of that lens, how thick you can image, what the field of view is going to be, how much you have to tile, and then the etendue, which is a metric of how much information a lens can capture. Next slide.

So what we've really done in this next system is think about what we can do to scale things up. And one thing that we found is that we can leverage technologies from the electronics inspection industry. There, literally, you can find these much larger lenses. And when you compare the specifications to traditional light-sheet optics, if we also wanted one- to two-micron resolution, these lenses have much longer working distances, over several centimeters in most cases. The field of view is massive, 16 to 32 millimeters in many cases. And then, finally, that etendue, in terms of how much information you're capturing, is much larger. Next animation. And what's really nice is that these lenses are designed to image electronics, which are very flat -- that's good for optics. They use visible light to image things, and they now have the resolution to see biologically relevant features. Next slide.

So we've developed a new large-scale light-sheet system around these types of lenses. There's a lot of dense information on here. But essentially, we have two versions of the system. On the left, we can actually fit a four-by-six-centimeter specimen in -- in our case, a whole intact, expanded brain. We can also make a sort of inverted version of the system for human and non-human primate tissue slabs, and we can image one centimeter thick. The resolution of the lens is about one micron in X, Y, and one and a half microns in Z. We have a one-square-centimeter field of view, and we can image about 50 cubic centimeters per day, or 50 teravoxels per day. And we can run the system at many gigabytes per second. Next slide.
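
A quick consistency check of those throughput figures, under the assumption of roughly 1-micron sampling and 16-bit samples (the actual system settings may differ), looks like this:

```python
# Rough consistency check of the quoted throughput (assumptions: ~1 um^3 voxels,
# 16-bit samples; actual acquisition settings may differ).
cm3_per_day = 50
voxels_per_day = cm3_per_day * 1e12          # 1 cm^3 at 1 um^3/voxel = 1e12 voxels -> 50 teravoxels/day
bytes_per_day = voxels_per_day * 2           # 16-bit samples -> ~100 TB/day
gb_per_second = bytes_per_day / 86400 / 1e9  # sustained average rate; peak rates will be higher
print(f"{voxels_per_day:.0e} voxels/day, ~{gb_per_second:.1f} GB/s sustained average")
```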

And so we've focused on version A, which is what I'm going to show today, but there are also other lenses that we can easily swap in to build versions B, C, and D of the system. Version D, for example, might be more ideal for human brain imaging, with slightly lower resolution. Next slide. So this is what the current prototype system looks like, version A. Next slide. And we've characterized the resolution. So it truly does provide fantastic resolution over the entire field of view. And you can see that using some light-sheet tricks, we can have almost isotropic resolution in Z. Next slide. So here's an example of the types of imaging experiments we've been doing. This is an entire intact, expanded brain. Next slide. So we can image that entire several-cubic-centimeter brain in just 12 imaging tiles, which is magnificent. Next slide.

So this is an example of the global view of that data set. Next slide. And if we zoom in on the cerebellum, next slide, and keep zooming in, you can see that with the expansion factor and the resolution of this lens, we're essentially imaging the entire mouse brain with 250-nanometer resolution. Next slide. And so this is what the 12 tiles look like. You can see in the overlapping regions that, despite the fact that this is an expanded brain, it's very easy for us to line up the seams. Next slide. And if we zoom in on one particular tile -- the center one is colored cyan, with magenta on the periphery -- you can see that it's white in the overlapping regions, meaning that we do have the ability to stitch individual fibers together between different tiles. Next slide. And so this is a video of one of the recent brains that we imaged. Looks like the video's not playing. Laura, can you click the video, or?

NED TALLEY: I see it playing.

ADAM GLASER: Okay. And so here we're zooming in on the cerebellum. And what you can see in this particular data set as we scroll around is that we will find individual fibers that have projections going back to the thalamus, where the labeled cell populations are within this particular brain. Well, I see it playing now. Yeah. So I think I'll just let it play for a second until we actually get to the fibers. So over here, you can see the projections, and we can trace the individual axons all the way back to the cell bodies. Next slide. So part of this is not just the hardware but the software as well. So we have custom acquisition software to run the microscope that we're developing with CZI. Next animation. We also have worked out –

NED TALLEY: I was so transfixed by your talk, I let you run over time there.

ADAM GLASER: Yes. So all to say that we have a preprocessing pipeline in place where we can actually stream the data locally, compress it with our HPC and then push it to the cloud faster than the microscope is producing data. And in conclusion, finally the next animation, Laura. Sorry, I'm way over on time. We are looking for early adopters and open dissemination of the hardware and software in 2023, so please get in touch if you're interested in discussing getting one of these systems. Thank you.

NED TALLEY: Okay. Thanks so much, Dr. Glaser, that's terrific. Next up, we have Kwanghun Chung. Kwanghun, are you on?

KWANGHUN CHUNG: Yep.

NED TALLEY: All right. Please proceed.

KWANGHUN CHUNG: Okay, great. So we've been working with many PIs within BICCN and BICAN to develop and apply a technology platform for multi-scale, multi-modal imaging and phenotyping of human brains. And this schematic drawing shows the process from MR imaging and tissue preservation all the way to tissue processing, imaging, and analysis. It also shows what kind of multi-scale information we can get, from macroscale cytoarchitecture down to individual synapses. Next slide. So the challenge that we are trying to address first is tissue preservation, which is important for pretty much all tissue imaging. And the current issue is the loss and uneven preservation, especially of mRNA and proteins, in thick human brain tissues. And here you see two images from the same tissue showing VGLUT1 FISH staining, and some images show nice bright signal, but some show really weak signal. So preservation of mRNA is obviously uneven. And also over time, even in PFA-fixed tissue, mRNA gets lost. Another challenge is highly autofluorescent lipofuscin, which makes signal detection and analysis challenging. Next slide, please.

So we address these challenges by developing SHIELD 2.0, a technology that enables exceptional preservation and protection of mRNAs and proteins in human brain tissues. The top shows the traditional method of FISH staining, and the bottom is SHIELD 2.0, and you can clearly see that the signal is much brighter. And once mRNA, proteins, and tissue structure are SHIELD-preserved, we can store the tissue for many months, and after many months the signal is still very bright. Next slide. And we can use photobleaching to remove lipofuscin without damaging mRNAs and protein-dense structures. So we can get equally good signal from photobleached tissues, and we can get rid of all of the lipofuscin autofluorescence. Next slide. And using many FISH probes and antibodies, we can map many mRNA transcripts and proteins within the same tissue. Next slide. And using this advantage, we can cross-validate FISH probes and antibodies within the same human brain tissue. Next slide.

So once the structural and molecular information is preserved, we can extract the information at multiple scales, as shown here. Next slide. So I will share one example. We processed and imaged two tissues, one control and one AD tissue. We first processed a four-millimeter-thick tissue and mapped all cells in this tissue volume, and we mapped the NeuN-positive cells to characterize NeuN-positive cell loss in the entire intact cortical tissue. Next slide. And we also did multi-round immunostaining to map many different cell types and also pathogenic factors, shown here, so we can study their interrelationships within the same tissue. Next slide.

And after simple image analysis, we can extract spatial and molecular information for all cells, and we can do interesting combinatorial analyses, summarized here. Next slide. And after that, if we want, we can do higher-resolution imaging of the same tissue to characterize subcellular architectures, shown here. Next slide. And these are some examples of our morphological analysis. We were able to classify subtypes of microglia and GFAP-positive astrocytes based on their morphology and characterize their functional state. Next slide. And we can also expand these tissues and do super-resolution imaging to characterize many interesting nanoscopic or subcellular features, such as demyelination in relation to pathogenic factor distribution, and also loss of external projections and synaptic density changes associated with these pathogenic factors. Next slide. Lastly, we have been trying to develop technologies that could enable connectivity mapping at single-fiber resolution in human brain tissues, by imaging intact fibers at single-fiber resolution and also computationally connecting fibers between slabs, and I hope that we have another chance to discuss this further in the near future. That's it. Thank you.

NED TALLEY: Okay. Thanks so much, Dr. Chung. Next up, we have Hong Yin. Dr. Yin. Do we have the right slides here? I've skipped. Oh, it's Partha Mitra. I'm sorry. My mistake. Partha, are you on?

PARTHA MITRA: Yes. Can you hear me?

NED TALLEY: Yes. We can hear you. Please proceed.

PARTHA MITRA: Thanks, Yong, Laura, and Ed. I'd like to share with you information about a project that I'm working on together with colleagues from NYU -- Jiangyang Zhang, Els Fieremans, Dmitry Novikov -- Daniel Tward from UCLA on brain registration, and David Nauen, who is at Johns Hopkins. And among our advisors, I want to call out Katrin, who just gave a talk. And the goal is that we want to create a next-generation histological multi-modal atlas - next slide, please - of the human brain. And here's sort of what we're doing. We start with the entire brain or pieces of brain. So far we have five hemispheres from Johns Hopkins and from the NIMH HBCC, thanks to Stefano Marenco. They go to NYU for microstructure MRI, then we slab them. David is doing the neuropathological assessment. We use the tape-transfer method to preserve 3D information, so we can reassemble these sections. We are scanning using a Huron whole-slide scanner.

And then the idea -- one of our main objectives -- is really to connect across to neuropathology, to neuroradiology, and also to the research community, so we act as a hub and produce a common coordinate framework that can also be used for the BICAN project for the human brain. So the next slide will show you - next slide please - some of the things that we are doing that are different. We are doing a Gallyas myelin stain. This is a very classic stain, but it produces beautifully resolved individual fibers. H&E stains, which are a bridge to laboratory medicine, to neuropathology, to autopsies and biopsies. And also the microstructure MRI, which hasn't really been done elsewhere. And again, we want to correlate the histology and the MRI together, with an eye eventually towards in-vivo diagnosis. Next slide, please.

And one of the important things is to preserve the 3-dimensional object which is the human brain. You have to cut it into slabs. I should mention, parenthetically, that we can freeze pretty large chunks, so we can actually freeze a 21-week-old fetal human brain without slabbing it at all. But for the adult human brain, we do have to slab it. We cut pretty thick slabs, two to three centimeters. We use custom 3D-printed molds: we image it in the mold, slab it in the mold, take it out, cryoprotect and freeze it, and then we section using the tape-transfer method. Next slide, please. And this allows us to put together information from adjacent sections. So you see the Nissl, myelin, and H&E on the left. This is a piece of the human amygdala. On the right, you see the MRI from the same sample. Next slide, please.

And here Daniel has reassembled the sections. So imagine we've got a contiguous set of sections. We are doing a series of four -- Nissl, myelin, H&E, and we are saving one for antibodies, with which we want to differentiate inhibitory from excitatory neurons, and also a vascular stain -- so that's our goal. But what you can see -- look at the reconstructed myelin; that's an actual reconstruction -- is the 3-dimensionality of the data, and we can place this into the MRI framework in MNI space. Next slide, please. And this is a human brainstem sample. This is the MRI. Next slide, please. And here are the myelin and the Nissl. And we are basically warming up to do the entire brain -- hemisphere first, and then we also have a protocol to do the whole brain. The tricky part is to do the neuropathological assessment on the whole brain. So one thing I want to note at the end of my talk is, please keep in mind –

NED TALLEY: One minute.

PARTHA MITRA: – the reason neuropathology uses H&E as its basic stain is that it's a very low-cost and very effective technique. Nissls and H&Es are very cheap compared with antibodies and other stains, so when thinking about scaling, that will be important. And for atlas mapping, the cytoarchitectonics are important. So that's all I have to say, and thank you.

NED TALLEY: Thanks, Partha. Great talk. Next up now, we do have Peng Yin. Dr. Yin, are you on?

PENG YIN: I think I'm on. Can people hear me?

NED TALLEY: Yes. We can hear you. Please proceed.

PENG YIN: So somehow the video does not – I don't understand why – okay, that's good now. Yeah. Thanks, Ned. And I also want to thank you and Laura for organizing this wonderful workshop. So yes, I'm going to share some of our work on developing reagents to facilitate high-plex brain imaging. So shown in panel A here is traditional fluorescent immuno-imaging. The idea is that you stain your protein markers with a primary antibody, and then you attach a fluorophore-labeled secondary antibody to amplify the signal and enable robust fluorescent imaging. The problem with traditional fluorescent imaging is that it has limited multiplexing -- typically three to four channels. To overcome this spectral limit, one intuitive idea is to do multiple rounds of sequential imaging. One mode is shown in panel B here, where people do sequential rounds of primary antibody staining, imaging, and stripping, and then restaining, imaging, and stripping again. This actually works well, but the problem is that primary antibody staining of the protein markers is a slow step -- it can take hours, and more commonly overnight. So multiple rounds of slow primary antibody staining are a bottleneck for this kind of method.

To overcome this staining bottleneck, we reported the idea of DNA-exchange imaging, where we use DNA-barcoded primary antibodies to stain all the protein targets in one single slow round. This is followed by multiple rapid rounds of imager-strand labeling and exchange to enable multiple rounds of rapid sequential imaging. So the idea of DNA-exchange imaging -- one single slow round of staining of targets with DNA-barcoded affinity reagents, followed by multiple rapid rounds of imager exchange -- is nice and has been widely adopted for sequential protein imaging and also for RNA imaging.

If you compare panel C, DNA-exchange imaging, with panels A and B, one thing is missing there. The missing element is the secondary antibody signal amplification. And this is undesirable because lower signal means potentially compromised imaging quality, restricted application scenarios, and also potentially lower throughput. So to bring back signal amplification, in panel D we described the idea of SABER signal amplification, where we use presynthesized DNA concatemers that bind to the primers on the primary antibody. Each concatemer can recruit multiple imager strands, and essentially these SABER concatemers can serve as secondary antibody analogs to enable high-plex, amplified imaging -- with this method, 15-20 plex.

More recently, in panel E, we developed another method we call in situ extension, or ISE. The idea is that rather than using a presynthesized concatemer amplifier, we use de novo cyclic chemistry to extend the primer in situ and synthesize the long concatemer in situ. This potentially allows much deeper tissue penetration, simplifies the workflow, and also allows higher-plex amplification. We demonstrated 30-plex amplified imaging and are now working towards 100-plex. So high-plex amplification is relevant, I think, particularly for the high-throughput brain imaging discussed here, as high-plex amplification can enable bright signal in deep tissues, which can increase imaging quality and potentially allow a faster scanning speed, and higher signal can also potentially reduce reagent cost. And finally, high-plex, single-round amplification can simplify logistics and potentially reduce labor costs. Next slide, please.

NED TALLEY: One minute.

PENG YIN: Yeah, so thanks. So shown here is the detailed workflow for ISE -- there are a lot of details here -- but this can be applied to imaging both protein and RNA. Shown on the right is 32-plex amplified imaging of 29 protein markers and three RNA markers in mouse brain. We also have some preliminary results on 16-plex amplified protein imaging in human brains and some initial deep-tissue data. Next slide. Right, so just acknowledgements: many members of the lab have done this amazing work. In particular, Ralf Jungmann and Yu Wang for pioneering DNA-exchange imaging; Sinem Saka, Jocelyn Kishi, and Yu Wang for SABER amplification; and more recently, Kuanwei Sheng and Han Su for leading the innovation on ISE high-plex amplification. Funding agencies, collaborators, and –

NED TALLEY: Okay, I think we got to cut you off there. Thank you so much. Interesting work. I believe next up we have Xiao Wang. Am I right about that? Yes, Dr. Wang, you're on.

XIAO WANG: Yeah.

NED TALLEY: Okay, please proceed.

XIAO WANG: Okay, great. Hello everyone. I'm really excited to be here to share with you our recent work from the past three years. Next. So our story starts from the development of STARmap, which is a 3D in situ transcriptomics approach. STARmap basically consists of three major steps. The first is the design of a pair of primer and padlock probes, with which we can conduct very specific signal amplification to get literally thousands of copies of the same probe barcode. Next, we can do hydrogel embedding, and in combination we can drastically increase the signal-to-noise ratio in brain tissues. And our approach is totally compatible with both thin and thick brain tissues due to this extremely amplified signal-to-noise ratio.

And finally, we have a two-base encoding approach to get a robust decoding process and error correction. So in combination, through the development of STARmap over the past few years, we have a robust pipeline to conduct brain cell typing in mouse brain and nonhuman primates routinely at the scale of thousands of genes. And after that, we further developed a versatile and robust computational pipeline for the downstream data analysis, as much of the raw data we are handling is three-dimensional, and there is a lack of computational pipelines that are directly compatible with 3D spatial transcriptomics.

So the pipeline we developed is called ClusterMap. It's a spatial clustering approach where the algorithm first clusters the individual RNA molecules detected by STARmap into individual cells. And second, based on the individual cells, it conducts another level of spatial clustering to identify molecularly defined tissue regions, and many of those, as I will show you, have very good correspondence with brain anatomy. And this approach is also totally compatible with many other technologies, such as MERFISH and other in situ hybridization methods based on the detection of RNA molecules. Next.
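
A minimal sketch of that two-level idea, using generic density-based clustering rather than the ClusterMap algorithm itself, is shown below; the distance thresholds and the simple feature construction are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Minimal sketch of the two-level grouping described above, using generic
# DBSCAN rather than the ClusterMap algorithm itself. eps/min_samples values
# and the feature scaling are illustrative assumptions.
def spots_to_cells(spot_xyz, eps_um=3.0, min_spots=10):
    """Group RNA spot coordinates (n, 3) into putative cells; label -1 = noise."""
    return DBSCAN(eps=eps_um, min_samples=min_spots).fit_predict(spot_xyz)

def cells_to_regions(cell_xy, cell_profiles, eps=0.5, min_cells=20):
    """Group cells into candidate tissue regions by position plus expression profile."""
    pos = (cell_xy - cell_xy.mean(0)) / cell_xy.std(0)                    # standardized position
    expr = cell_profiles / (cell_profiles.sum(1, keepdims=True) + 1e-9)   # per-cell gene fractions
    return DBSCAN(eps=eps, min_samples=min_cells).fit_predict(np.hstack([pos, expr]))
```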

Combining these two pipelines, we also conducted a brain-wide mapping of brain cell types using 1,022 genes. And in combination with available single-cell sequencing data sets, we achieved a very comprehensive mapping of the brain cell types and fully registered them with the Allen Institute's Common Coordinate Framework. We also characterized molecularly defined tissue regions and benchmarked these molecularly defined tissue regions against existing brain anatomy, where we can see many interesting spatial patterns. Because of the presentations from Xiaowei and Hongkui, which were fantastic talks, I won't spend more time on the topic of this mouse brain atlas. I just want to further introduce some unique results we obtained through this process. Next.

Because of the availability of this spatially resolved single-cell brain mapping, there are more and more data and annotations available for molecular cell types. But what's next is that we want to really make connections with the brain connectome as well as animal behaviors. That is to say, we need ways of linking molecular cell types with genetic manipulation techniques, so that we can come back with, for example, fluorescent labeling, chemogenetics, or optogenetics-type manipulation of individual molecular cell types. In order to achieve this, we also included mapping of brain-wide AAV transfection with a particular serotype, PHP.eB, that can get across the blood-brain barrier. Next. And then we could actually resolve the AAV tropism across all the molecular brain cell types we have defined in this study. And finally, I just want to spend a little bit of time introducing a new technology we developed in our lab, where we try to detect actively translating mRNA. Next.

NED TALLEY: You're coming up on the end. It's got to go quick.

XIAO WANG: Yeah. Okay, cool. So we call it RIBOmap. I think RIBOmap provides a really nice link between the transcriptome and the proteome. The key development we have here is to specifically amplify the ribosome-attached mRNA as a readout of the so-called translatome. Next. And using this RIBOmap approach, although we are only sampling a subset of the transcriptome, it is the most functionally relevant estimate of the nascent proteome. So RIBOmap is sensitive enough to, again, recover these molecularly defined brain cell types. Next. But more importantly, we have a direct approach to study translational regulation inside cells, including cell type-specific translational regulation as well as localized translation events in dendritic or glial processes. Okay. That's all.

NED TALLEY: Okay, we really got to cut you off because people are going to be mad at us for cutting into their break. Thanks so much. It's great work. Next up is Xiaoyin Chen, another junior investigator from Allen Institute. Xiaoyin, are you on?

XIAOYIN CHEN: Yes, I'm here.

NED TALLEY: Okay, please proceed.

XIAOYIN CHEN: Hello, everyone. So I'm going to talk about in situ sequencing in the context of brain-wide-scale interrogation. Next slide, please. So we originally developed this technique called BARseq as a way to map long-range projections with high throughput and single-cell resolution. And we do this by in situ sequencing of RNA barcodes. When I was a postdoc in Tony Zador's lab, we applied BARseq to a number of different circuits, and these are described in the studies in the bottom left corner. But then we realized that we could actually use the same technique to also just look at endogenous gene expression. And this turned out to be a really high-throughput and low-cost way to do spatial transcriptomics.

So in this paper, shown on the right side, which we just uploaded to bioRxiv last month, we used BARseq to target the expression of 107 genes across one mouse hemisphere. We targeted only 107 genes because we were interested in the cortex, and these genes are optimized for resolving cortical cell types. But in other applications, we can do many more genes than that. So BARseq has several advantages that I think are relevant to mapping large brains. Without going into the technical details of how BARseq works, I just want to highlight that BARseq has very high throughput. For example, this data set of 40 coronal sections was collected in seven days, and that was done a year ago; now we can actually do this in one or two days. It also comes at a very low cost -- $3,000 for the data set, and now it's actually cheaper. And BARseq is also compatible with the tape-transfer systems that people use to do cryosectioning. This is especially useful for sectioning large brains to get consistently good sections. Next slide, please.

So in this slide, we first ask: can we resolve cell types? We clustered our gene expression and mapped our BARseq clusters, shown on the left, to single-cell RNAseq clusters from another data set, shown at the bottom. And you see this diagonal line, which means that BARseq clusters were able to resolve gene expression differences across single-cell RNAseq cell types at the finest resolution. Next slide, please. So then we can ask how cell types are distributed over the whole cortex. And one finding we had was that even though many cell types can be shared across multiple cortical areas, the ratios of cell types within an area -- we call these the compositional profiles of cell types -- can be quite distinct. For example, on the left, I am showing you a few cell types in the cortex. If you focus on the blue one at the bottom, you can see that this cell type is present from the SSp trunk area all the way to the secondary somatosensory cortex, but the ratios of that blue type to the green type and the orange type are actually quite different from area to area.

So then we can ask which areas are more similar to each other in cell types, and we clustered these areas; these are shown on the right. And it turns out that the areas that are clustered together -- that is, the areas that have similar cell types -- are also the same areas that are more highly connected to each other, based on previous connectivity studies. So this suggests a principle of mesoscale wiring throughout the cortex, which is that cortical areas with similar cell types are also highly connected to each other. Next slide, please.
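
A minimal sketch of that area-by-composition comparison (not the authors' exact analysis) is shown below; Ward linkage and the cluster count are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Minimal sketch of clustering cortical areas by cell-type composition.
# Not the authors' exact analysis; Ward linkage and the cluster count are
# illustrative assumptions.
def cluster_areas_by_composition(counts, n_clusters=5):
    """counts: (n_areas, n_types) array of cell counts per area."""
    fractions = counts / counts.sum(axis=1, keepdims=True)     # compositional profile per area
    tree = linkage(fractions, method="ward")                   # hierarchical clustering of areas
    return fcluster(tree, t=n_clusters, criterion="maxclust")  # cluster label per area
```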

So far, that was all done in mice, but we are also pushing to do BARseq to study non-human primates and humans. So on the left, I am showing you the same kind of cell-typing experiment in macaque cortex. You can see nice layered structures of cell types in the macaque cortex. We are also doing projection mapping in macaque, and this is done in collaboration with Greg Horwitz at UW. So this involves not only sequencing gene expression, but also the RNA barcodes that we use to label these neurons. So in this experiment, we barcode the neurons in –

NED TALLEY: One minute.

XIAOYIN CHEN: – V1 and V2, and we map projections to each other and also to other brain regions. And on the right, I'm showing you two example neurons from this data set. So the little blue and yellow dots indicate RNA barcodes we found in the axons. So here you can see one neuron projects from V2 to V1 and one neuron projecting from V1 to V2. And on the right, I'm showing you that we can also do this in human tissue. So all the chemistry for in situ sequencing also works in human tissues. Next slide, please.