Skepticism about synaptic nanocolumns

A couple of days ago I came across a recent paper in Nature with quite an eyebrow-raising title: “A trans-synaptic nanocolumn aligns neurotransmitter release to receptors”. The title made it sound as if the authors had observed hitherto unknown structures in the synaptic cleft. That would be quite a sensation! But then this sentence comes in the abstract:

These presynaptic RIM nanoclusters closely align with concentrated postsynaptic receptors and scaffolding proteins [4–6], suggesting the existence of a trans-synaptic molecular ‘nanocolumn’.

Now it looked like they were proposing some kind of neuroscience equivalent of dark matter. You know, something that nobody knows what it is, but that certainly must be there, because otherwise there is no explanation for what we see.

So let’s move on to that ‘see’ verb, because that’s what the paper is all about. According to the authors, the article is the first application of super-resolution microscopy to the imaging of living neurons (taken from murine hippocampus and cultured in a Petri dish). The group of Thomas Blanpied at the University of Maryland did a pretty exciting job of setting up a bunch of state-of-the-art fluorescence microscopy experiments for visualizing neurons in action. They used PALM, STORM, and a 3D variant of the latter.

Quite surprisingly, the paper is not overloaded with pretty pictures of neurons but rather with dull red and blue swarms of points that represent the labeled proteins of interest. It is from these swarms that the authors drew their major conclusion: the existence of synaptic nanocolumns that align neurotransmitter release sites with the corresponding receptors and the deeper postsynaptic machinery. So the major method of the paper is sophisticated data analysis rather than direct microscopic observation.

RIM1/2 (red) and PSD-95 (blue) forming synaptic nanocolumns in cultured neurons


The analysis is sophisticated indeed. The problem is that the data are very complex and noisy. So noisy that an ordinary human eye (e.g., mine) won’t recognize those ‘nanocolumns’ even with all the authors’ effort at highlighting them in the figures and in supplemental video 1. They developed computational algorithms for data processing in order to put some numbers on those protein swarms. As a non-data-scientist, I really enjoyed untangling the logic behind each step of the analysis. Clearly, the authors registered co-localized continuous inhomogeneities in the distribution of several synaptic proteins. They also made an enormous effort to prove that those inhomogeneities are not imaging artifacts and to reach statistical significance. The effort is so enormous that a reader can be distracted from the actual ‘signal’ size, which is quite small in some cases.
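To give a flavor of what ‘putting numbers on protein swarms’ can look like, here is a toy Monte Carlo sketch (my own illustration, not the paper’s actual pipeline): we ask whether a set of localization points is more clustered than complete spatial randomness, by comparing the observed mean nearest-neighbor distance to that of simulated uniform point sets. The point coordinates and synapse size below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_nn_distance(points):
    """Mean distance from each point to its nearest neighbor (brute force)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # a point is not its own neighbor
    return d.min(axis=1).mean()

def clustering_p_value(points, extent, n_sim=500):
    """Monte Carlo test: is the observed mean nearest-neighbor distance
    smaller than expected for uniformly random points in a square box
    of side `extent`? Returns a one-sided p-value."""
    observed = mean_nn_distance(points)
    sims = np.array([
        mean_nn_distance(rng.uniform(0, extent, size=points.shape))
        for _ in range(n_sim)
    ])
    # Fraction of random point sets at least as clustered as the data
    return (np.sum(sims <= observed) + 1) / (n_sim + 1)

# Toy 'localization swarm': two tight clusters inside a 1000 nm region
cluster_a = rng.normal(300, 20, size=(40, 2))
cluster_b = rng.normal(700, 20, size=(40, 2))
points = np.vstack([cluster_a, cluster_b])

p = clustering_p_value(points, extent=1000)
print(f"p = {p:.3f}")  # small p => points are significantly clustered
```

The real analysis in the paper is far more involved (3D data, cross-correlation between two protein channels, localization uncertainty), but the underlying logic of comparing an observed spatial statistic against a randomized null is the same.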

For some conclusions the authors had to invent entirely new experimental protocols (pHuse, for imaging single-vesicle fusions) and data-analysis protocols, which is not uncommon when one does something that nobody has done before. But overall I was left in doubt as to whether these ‘nanocolumn’ structures are a feasible idea or just a provocative concept in Professor Blanpied’s head, one that is nicely illustrated in another supplemental video (also on YouTube).

The day after the paper appeared in Nature, someone with a Baltimore IP address added an extensive (second) definition of ‘nanocolumn’ to Wiktionary, which also explains where the whole ‘column’ buzz stems from. So it looks like the lab will push this idea forward very hard.


P.S. I also liked this passage from the Methods section, which is basically a handy step-by-step manual for any statistical data analysis (although I suspect it’s more of a generic placeholder, since most of the statistical tests are not mentioned anywhere else in the paper):

For comparison of two or more distributions, all samples were assessed for normality using Shapiro–Wilk or Kolmogorov–Smirnov tests. If samples met criteria for normality, we used a Student’s t-test to compare two groups, a paired t-test for comparison of the same group before and after a treatment, or ANOVA for more than two groups. If ANOVAs were significant, we used a post hoc Tukey test to compare between groups. For groups with combinations of discrete and continuous variables, we used MANCOVAs. We only performed two-tailed tests. Homogeneity of variances was tested using an F-test and found to be similar between compared groups. If samples did not meet criteria for parametric tests, we used Kolmogorov–Smirnov or Wilcoxon rank-sum tests for comparison of two groups and Kruskal–Wallis or Friedman ANOVA for comparison of more than two groups, with post hoc analysis using Dunn’s test.
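The decision tree in that passage translates almost line by line into code. Below is a minimal sketch of the two-group branch using `scipy.stats`; note that the Methods cite an F-test for variance homogeneity, while I use Levene’s test as a readily available stand-in, and the example data are invented.

```python
import numpy as np
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Sketch of the Methods-section workflow for two independent groups:
    Shapiro-Wilk for normality, then Student's/Welch's t-test (parametric)
    or the Wilcoxon rank-sum test (non-parametric)."""
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if normal:
        # Methods use an F-test for homogeneity of variances;
        # Levene's test is the common scipy stand-in
        equal_var = stats.levene(a, b).pvalue > alpha
        res = stats.ttest_ind(a, b, equal_var=equal_var)
        name = "Student's t" if equal_var else "Welch's t"
    else:
        res = stats.ranksums(a, b)  # Wilcoxon rank-sum
        name = "Wilcoxon rank-sum"
    return name, res.pvalue

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 30)  # made-up control group
b = rng.normal(2.0, 1.0, 30)  # made-up treated group, large effect
test_name, p_value = compare_two_groups(a, b)
print(test_name, p_value)
```

The remaining branches (paired t-test, ANOVA with Tukey post hoc, Kruskal–Wallis with Dunn’s test) extend the same pattern with more groups or a pairing structure.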



Author: Slava Bernat

I did my PhD in medicinal chemistry/chemical biology of G protein-coupled receptors and then explored some chemical biology of non-coding RNA as a postdoc. Currently I'm working as a research chemist in a small biotech company in the San Francisco Bay Area. I write about science that catches my attention in my RSS feed reader, plus some random thoughts and tutorials.

