1. Why is the BigBrain data set so important for cytoarchitectonic studies?
The BigBrain model, among other goals (see this link for more information: BigBrain Project), was developed to be a publicly available source of microscopic human brain data at “nearly cellular resolution”. However, to the best of my knowledge, the uniqueness of the BigBrain lies not in its resolution, which, as it stands now, is rather far from cellular. The uniqueness is in its continuity: it is an uninterrupted sequence of 7404 serial sections. There are several publicly available sources of microscopic human brain data at much higher resolution (for example, The Human Brain or the Allen Reference Atlas), but all of them are “interrupted sequences”, meaning that huge gaps exist between consecutive sections.
As we know, the cortical plate is a complex three-dimensional structure. A two-dimensional section inevitably contains regions where the cortex is cut at an oblique angle and thus presents a distorted image of the cortical traverse. Only a small fraction of any section shows cortical structure perpendicular to the surface and co-planar with the orientation of the columns. Obviously, the only way to look at cortical structure in its true, “unbiased” presentation is a high-resolution reconstruction (1 micrometer or better) in 3D space. The availability of an uninterrupted sequence of sections is a major prerequisite for achieving this goal.
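The distortion caused by an oblique cut is easy to quantify with a back-of-the-envelope sketch (the function name and the 3 mm example thickness are mine, chosen only for illustration): a section plane tilted away from the ideal perpendicular orientation stretches the apparent cortical traverse by a factor of 1/cos(angle).

```python
import math

def apparent_thickness(true_thickness_mm: float, obliquity_deg: float) -> float:
    """Apparent cortical thickness in a 2D section cut at an oblique angle.

    A section plane tilted by `obliquity_deg` away from the ideal
    (perpendicular-to-surface) orientation stretches the cortical
    traverse by a factor of 1 / cos(angle).
    """
    return true_thickness_mm / math.cos(math.radians(obliquity_deg))

# A hypothetical 3 mm cortical plate looks progressively thicker as the
# cut deviates from the perpendicular orientation:
for angle in (0, 30, 60):
    print(f"{angle:2d} deg -> {apparent_thickness(3.0, angle):.1f} mm")
```

At 60 degrees of obliquity the traverse already appears twice as thick as it really is, which is why only a 3D reconstruction can show the layers everywhere at their true proportions.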
So, as we should understand, the BigBrain data set is the result of a tremendous effort. This effort is truly heroic, considering the amount of work and dedication required to prepare, digitize and process all these sections, let alone to correct cutting artifacts in each and every one of them. Additional resources had to be allocated to store all 7000+ sections, and some rather powerful servers had to be dedicated to making the image data publicly available. This part was provided by the Montreal Neurological Institute of McGill University, Canada.
Now, let’s look more closely at the HBP’s BigBrain Viewer, posted specifically to show this unique sequence of sections.
2. What is wrong with the HBP’s “BigBrain Viewer”?
I decided to post this description of the “Qute Brain Browser” after I saw the quite disappointing interface presented by the HBP-sponsored BigBrain Viewer, developed using Neuroglancer (an open-source WebGL viewer for volumetric data). To avoid possible misunderstandings, it has to be emphasized that I do not discuss here the Neuroglancer code itself, or other implementations based on it. The discussion that follows applies only to the HBP’s BigBrain Viewer. By the way, anyone can check all of this using this link: Explore the BigBrain online
Additionally, I was recently given the opportunity to test and use a special edition of “Atelier3D”, an application developed by the National Research Council of Canada (NRC). It is also dedicated specifically to visualizing the BigBrain dataset, but unfortunately the 3D functionality of the A3D application shares the deficiency of the HBP’s BigBrain Viewer. This created additional motivation for posting these notes.
One has to realize that the HBP’s BigBrain Viewer was created to demonstrate the result of several years of work by a large group of scientists, neuroanatomists and engineers, who represent the biggest and most significant project in the history of neuroscience, not to mention the level of funding, which supposedly reaches one BILLION euros. It was developed to demonstrate an ultra-high-resolution model of the human brain, codenamed “BigBrain”, which is described in a paper published five years ago: http://science.sciencemag.org/content/340/6139/1472.
In IE it shows a black screen. No warnings, no messages of any kind, just a black screen. ***04/01/2019 – Correction: this problem was recently fixed: instead of a black screen it now shows a message like “Please use Chrome or Firefox”. Good!***
In Chrome and Firefox it initially shows low-resolution images, as expected. The screen consists of four panes. Three are for coronal, sagittal and horizontal 2D sections, and they work fine. They allow for a rather quick increase of image resolution, sections can easily be dragged around, and they can even be rendered at any arbitrary angle (a nice feature, especially for cortical regions, to see layers with the correct section orientation). So, except for some significant registration artifacts, which are hardly the viewer developers’ fault (see, for example, Fig. 1 below), and a black screen in IE, one should not have serious complaints about the 2D functionality of this viewer.
Fig. 1: Significant registration artifacts in BigBrain dataset.
The fourth pane is for three-dimensional viewing, and it is actually the source of all the problems. The 3D pane shows a strangely colored cortical surface (mesh) with three orthogonal 2D sections. Strangely enough, the 3D surface mesh does not include the cerebellum, so it is shown only as a single section, which makes a very strange impression (see Fig. 2 below).
Fig. 2: Where is the cerebellum?
The 3D panel is shown in the image above at its maximum level of resolution. It can be “zoomed in” further, but the resolution does not increase. So, after “zooming in”, the 3D image becomes even stranger. It looks quite intriguing, but is hardly useful.
Fig. 3: No comments.
Its resolution is not “synchronized” with the resolution of the 2D sections, and is not even close to 20 microns. The 3D sector cut-off can be switched on and off (the “x” keyboard key), but it freezes soon enough after a few attempts. Time permitting, I could continue this list of complaints, but the conclusion is obvious: the 3D functionality of the application is grossly inadequate. It might impress a naïve user, but it will hardly be useful for any serious purpose, as declared in the publication mentioned above, let alone in the fundamental documents of the Human Brain Project.
In other words, the bottom line is: the “BigBrain” model deserves a better viewer!
So, how should I explain the obvious: why is 3D visualization crucial for brain data? There are plenty of reasons, and they are different at the macro- and microscopic levels. As any medical student would agree, neuroanatomy is a tough subject. One of the reasons is the quite complex spatial relationships between different parts of the brain, which require looking at the same part from different angles, making many cuts at different levels, etc. However, as anyone with experience handling a human brain during postmortem examination learns soon enough, any brain cut is final and irreversible. As we used to say, “think ten times, cut only once”. Yet multiple cuts at different levels and at different angles are needed to understand the topography and spatial relationships between different parts of the brain, and any brain atlas or neuroanatomy textbook shows this quite clearly. So, one cut can be quite incompatible with another. For learning brain anatomy, many samples of different brains have to be carefully prepared, which is very difficult to achieve in practice.

The BigBrain model solves this problem: you can cut it any way you want, as long as you have an adequate software tool. A 3D presentation of the brain, with the ability to rotate it, cut it dynamically and inspect the model from different angles, has unique potential as a tool for learning brain anatomy. However, I suspect that the developer(s) who implemented the HBP’s 3D viewer have a lot of experience with processing MRI data, but very little or no experience with handling a real brain, and probably little-to-no supervision from someone who has such experience. The result is quite poor: as things stand, the HBP’s BigBrain Viewer is a lost opportunity as a tool for learning the anatomy of the human brain.
Of course, the “boundary” between macro- and microscopic image resolution is fuzzy, but the microscopic level is quite different. Unfortunately, the BigBrain model is not yet at the microscopy level (20-micron resolution is only the beginning of the microscopy scale), and there is very limited prior experience with rendering such a huge image volume at this level! Even though 3D microscopy (confocal microscopy, scanning electron microscopy) has existed for many years, the “scale” (or size) of the observed 3D image has remained rather small all these years. In my experience, due to the high density of cellular elements it is difficult to select adequate camera parameters, lighting conditions, etc. to render a representative volume of a human cortical area in 3D at the 1-micron resolution level. At least, my own first attempts do not yet look attractive enough (see Figs. 10 and 11 below).
Additionally, the visualization technology that exists today is not ready to browse and render an image that presently occupies 100+ GB on disk and can easily reach terabytes or more in the near future. In addition to the size problem, a microscopy image presents another challenge: its sparsity changes dynamically with increasing resolution (or magnification). So, rendering such an image requires solving a difficult problem: either a new mesh has to be generated “on the fly”, or the transparency threshold has to be changed every time the resolution changes. But regardless of implementation details, this only means that a “3 times 2D”-style viewer is even less adequate at the microscopic level than it is at the “macro” level.
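The sparsity problem can be made concrete with a toy calculation (the random volume, the 2% cell density and the fixed 0.5 threshold are all hypothetical, chosen only to show the effect): at full resolution only a small fraction of voxels belongs to stained cell bodies, but each 2× downsampling step averages stained and empty voxels together, so the occupied fraction grows while a fixed transparency threshold shows less and less.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "stained tissue" volume: ~2% of voxels are dark cell bodies (1.0),
# the rest is empty background (0.0).
vol = (rng.random((64, 64, 64)) < 0.02).astype(np.float32)

def downsample2x(v: np.ndarray) -> np.ndarray:
    """Average 2x2x2 blocks - what a viewer does when zooming out."""
    x, y, z = (s // 2 for s in v.shape)
    return v[:2*x, :2*y, :2*z].reshape(x, 2, y, 2, z, 2).mean(axis=(1, 3, 5))

occupancies, visibles = [], []
level, thr = vol, 0.5
for step in range(4):
    occupancies.append(float((level > 0.0).mean()))  # fraction of non-empty voxels
    visibles.append(float((level > thr).mean()))     # what a fixed threshold shows
    print(f"level {step}: occupied {occupancies[-1]:.3f}, "
          f"visible at thr={thr}: {visibles[-1]:.3f}")
    level = downsample2x(level)
```

The occupied fraction climbs toward 1.0 within a few levels while the fixed-threshold visible fraction collapses toward zero, so a viewer must either rebuild the mesh or re-tune transparency at every resolution change, exactly the dilemma described above.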
With this in mind, I will try to demonstrate some examples of an alternative implementation of a 3D viewer, in the context of different examples of expected functionality.
3. What a 3D viewer should look like and what it is supposed to do, using the “Qute Brain Browser” as an example.
To begin, let’s ask a naïve question: is there any difference between 3D and “multiples of 2D”?
Fig. 4: 3D (left) versus “3 times 2D” (right)
To make a long story short, I would say that “3D” (left image) is convex, while “3 times 2D” (right image) is concave. It is not very eloquent, but it should be clear enough. In other words, the 3D panel in the “BigBrain Viewer” should look “the other way around”: presently, the HBP’s BigBrain Viewer cuts off the observer’s target. It cuts out exactly what it has to display!
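The convex/concave distinction can be reduced to a one-line predicate (the function names and the corner-octant convention are mine, purely for illustration): three orthogonal cutting planes through a focus point select a corner octant; a “3 times 2D” viewer carves that octant out of the volume, while a proper 3D cut keeps exactly that octant, exposing the interior at the cut surfaces.

```python
def in_target_octant(p, focus):
    """True if point p lies in the corner octant selected by the three
    orthogonal cutting planes that intersect at `focus`."""
    return all(pc <= fc for pc, fc in zip(p, focus))

def visible_convex(p, focus):
    # "3D": keep the target octant -> a solid corner block (convex)
    return in_target_octant(p, focus)

def visible_concave(p, focus):
    # "3 times 2D": carve the target octant out of the volume (concave)
    return not in_target_octant(p, focus)

focus = (10, 10, 10)
target = (5, 5, 5)  # a point inside the region the user wants to inspect
print(visible_convex(target, focus))   # True: the convex cut shows it
print(visible_concave(target, focus))  # False: the concave cut removes it
```

The last two lines are the whole complaint in miniature: under the concave convention the point the user is trying to inspect is precisely the one that gets discarded.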