This is a very good point, also raised by our reviewers. It is true that the class distinction is initially based on the time point of a cell's field (during the odor delivery versus during the delay) and, in that sense, is arbitrary. This distinction is then supported phenomenologically by the differences we find in stability, adaptability, and shaping during learning. However, these are not enough to safely conclude that these cells are indeed separate classes of neurons. We are thus currently working on follow-up experiments, including the out-of-context odor presentation suggested here, to further investigate whether these are truly two separate but interdependent classes or whether they are, in fact, simply a sequence of cells initiated by the odor presentation.
- The authors report better decoding performance from classifiers trained on odor cells compared to those trained on time cells. However, Fig. 3A suggests that there are significantly more odor cells than time cells, so the improved performance may simply be due to higher dimensionality. Are the classifier results consistent if the odor cell population is downsampled to match that of time cells? This logic also applies to the analyses assessing field stability across days. Is there really more instability in late time cells or is this a consequence of there being a smaller sample of time cells late in the delay?
The numbers of odor- versus time-cells are compared in Figure 2D (bottom) and are actually similar (in fact, one sequence yields significantly fewer odor-cells on average). Figure 3A does not suggest a lower number of time-cells, but rather that their distribution decreases over the delay period, so that late time points are represented by fewer time-cells than early ones. This should not affect our analysis, however, since the SVM classifications consider all odor-cells versus all time-cells, and the Bayesian decoders use the whole sequence to decode each time point. We therefore do not expect that the improved decoding performance of odor-cells is due to their higher number, even though we did not explicitly perform the suggested downsampling control.
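For concreteness, the reviewer's control could be sketched as below. This is a minimal illustration with synthetic data and hypothetical variable names, not the analysis code from the manuscript: odor cells are subsampled to match the time-cell count before retraining the classifier, so that any remaining accuracy gap cannot be attributed to dimensionality alone.

```python
# Sketch of the suggested downsampling control (synthetic data stands in
# for real trial-by-cell activity; all names here are hypothetical).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 60
n_odor_cells, n_time_cells = 80, 30           # odor cells outnumber time cells
labels = rng.integers(0, 2, n_trials)         # odor A vs. odor B per trial

# Synthetic trial-averaged activity (trials x cells); odor cells are
# given somewhat stronger odor tuning than time cells for illustration.
odor_act = rng.normal(0, 1, (n_trials, n_odor_cells)) + labels[:, None] * 0.5
time_act = rng.normal(0, 1, (n_trials, n_time_cells)) + labels[:, None] * 0.2

# Control: randomly subsample odor cells down to the time-cell count.
keep = rng.choice(n_odor_cells, size=n_time_cells, replace=False)
odor_sub = odor_act[:, keep]

acc_full = cross_val_score(SVC(kernel="linear"), odor_act, labels, cv=5).mean()
acc_sub = cross_val_score(SVC(kernel="linear"), odor_sub, labels, cv=5).mean()
acc_time = cross_val_score(SVC(kernel="linear"), time_act, labels, cv=5).mean()
print(acc_full, acc_sub, acc_time)
```

If `acc_sub` remains well above `acc_time`, the decoding advantage of odor cells cannot be explained by their number alone.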
The reason time-cells yield lower decoding performance is better demonstrated in Figure 3B. There, we show that these cells (particularly later in the delay) are more sparsely activated across trials and have larger variability in spiking time. In contrast, odor-cells spike significantly more often and within their field (i.e., during the odor presentation), yielding much better decoders.
Regarding instability across days: indeed, fewer time-cells exist at late delay points than at earlier ones, but we do not expect their distributions of field-shifts to be affected by sample size to an extent that would call the finding into question. We believe it is evident from displays of the distributions (as in Fig. 3G) that the difference in average shifts would still hold if we randomly picked subsamples of early versus late time-cells. We would also like to add that our results still hold even when we normalize these shifts by the maximum shift a cell could exhibit (e.g., a cell in the middle of the delay cannot exhibit as large a shift as a cell at the very beginning or end of the delay). We will explicitly state this in the revised manuscript.
- Fig. 2C shows distinct sequences for each odor, so in a sense, the entire sequence ensemble is composed of odor cells. It may be more accurate to say that most cells involved in the task are really conjunctive odor-time cells.
We chose to name cells that encode time-points in a delay following a particular stimulus as “time-cells”, as in MacDonald et al. 2013. As you suggest here, these could also be called “odor-time cells” since they are odor-specific.
- The authors present interesting data on the development of sequences over learning and non-learning. Figs. 6 and 7 were our favorites out of the whole manuscript. This aspect of temporal sequence-building has never been described in much detail, and the community would benefit from more analyses from this portion of the experiment. For example, Alcino Silva’s work suggests that more excitable cells are more likely to be recruited to a memory trace. Do cells with higher calcium activity tend to enter the sequence over learning? Are sequences generally more plastic early in learning? We think that the manuscript can grow stronger from focusing on this area, as other studies have investigated the stability of sequences over days (Mau et al., 2018) and how changing the delay duration affects time cells (MacDonald et al., 2011). This angle would also diminish what we believe to be potentially problematic interpretations of odor selectivity versus temporal selectivity outlined above.
These are indeed great questions. We plan to delve deeper into this line of investigation in a future study that will focus on the details of sequence emergence during learning. For this one, our goal was to focus on the overall differences in stability, adaptability, and emergence between the two encoding categories, and to demonstrate how CA1 circuits allow for two encoding regimes.
- In analyses involving comparisons of fields across days, cells that lost their field were considered “unstable”. That means that analyses that look at the field shift of these unstable cells are comparing a field on one day and a non-field on another. Therefore, field shift analyses in Figs. 4 and 5 should not compare between stable and unstable cells since this comparison is unfair and not informative. If a field disappears, the authors should not be able to say that the field “shifted”. Along those lines, we would recommend removing the blue data points (as in Fig. 4F) from these analyses since they demarcate a cell’s non-field, which has questionable informational content.
It is true that non-significant fields (simply peaks in mean activity) do not carry statistically sound information. We use them only as a proxy for where cells “tend” to fire once they lose their field. We show this information for completeness and to draw a sharper distinction with cells that retain a stable field, though we avoid a direct comparison. We will examine the text and make any necessary changes to explain this point better.
Minor concerns
- A figure describing the performance of the mice in this task is needed early in the manuscript.
We initially included one but it carried essentially the same information as Figure 6A so we removed it.
- In Figure 2C, there is an interesting “shadow sequence” following delivery of the second odor, possibly due to relaunching of specific odor-sequences. But this is likely obscured due to some trials being odor A-odor A and others odor A-odor B (and vice versa). It may be beneficial to partition trials into the four odor A/odor B combinations to visualize odor-specific sequences more clearly.
The “echo” after the second odor is indeed due to the sequence repeating when the second odor is the same as the first. We are working on a supplemental figure on this.
- Regarding Fig. 2D, please comment on why there might be more activity associated with odor A on average (for both the odor delivery and delay).
We have no insight into this. It probably has to do with how the olfactory neural circuits (leading up to the hippocampus) process the two odors.
- In Fig. 3, paradoxically and interestingly, the PCA trajectories of the two sequences converge over the delay, yet they are distinguishable using the SVM. Please comment on this discrepancy. Perhaps it would be more suitable to use a multivariate distance metric across the trajectories rather than Euclidean distance to the PC trajectory.
We comment in the text on how PCA can be heavily influenced by sparseness in firing rates. This is why we shift away from this technique and rely on better decoding approaches for the rest of the text, although improved PCA methods could also be used, as you suggest.
- The middle and right columns in Fig. 3D are confusing. Consider placing them in supplemental figures instead.
- While interesting, the explanation for the analysis behind Fig. 3E is too brief. The authors should explain this analysis and its motivation in more detail.
Our goal here was to examine the decoding power of individual neurons instead of whole sequences or grouped odor- versus time-cells. This was done by testing how well an SVM classifier can distinguish the odors from the activity of a single neuron.
- In the first sentence of p. 15, the authors suggest that improved classifier performance over training was a result of increasing number of sequence cells, but this is not explicitly tested. If the sequence cells used to train the classifier were downsampled, is decoding constant over days?
This is a good point that we did not explicitly test. Our conclusion was drawn indirectly from the fact that the spiking properties of sequence-cells did not change over days (Supp. Fig. 8); only their number did.
- Please describe the passive exposure to the DNMS task in more detail in the main text.
These were naïve mice that were never trained to perform the DNMS task (or even to lick the water tube). They were only water-deprived, for consistency. So when we exposed them to DNMS trials, they simply passively smelled the odors, which carried no task context.
Other comments
- All x axes that say “Firing fields (sec)” would be better described with “Firing field times (sec)”.
- In the last paragraph of p. 3, it is not clear how the non-match-to-long-duration-sample task tests the mice’s reliance on lingering odor during the delay. Was the odor concentration during the delay measured to be below this 0.01% threshold?
- In the text, please make it more clear that a “sequence-cell” was a cell that was either an odor-cell or time-cell rather than simply stating that they are “in each sequence”.
- In the third paragraph of p. 8, “odor-cells had significantly higher activation probability and lower spiking variance that time-cells” should read “... than time-cells”.
- In the second to last paragraph of p. 9, “few time-cells recurred for more that 2 days” should read “...than 2 days”.
- In the third to last paragraph of p. 11, the last sentence beginning with “Odor-cell stability…” is grammatically problematic.
- In the first paragraph of p. 13, “decoded time the 10 sec trials” should read “decoded time during the 10 sec trials”.
- In the last paragraph of p. 29, “thought” should be “though”.
- In the first paragraph of the last page, “Chance baselines where computed” should be “Chance baselines were computed”.
Figure 1.
- If possible, replace the red or green channel with a different color to accommodate colorblind individuals.
- In the legend for E, state that the green and yellow bars indicate odor delivery.
Figure 4.
- In A, the cell ROI examples on the bottom are hard to see. Consider increasing the contrast in those insets to better display the cells. Particularly for the second ROI, there is a clearly visible cell right below the ROI that might serve as a better example.
Figure 6
- What are the y-axes for the Bayesian decoder in 6F and 6H? The legend says they are the same as in 6C/6D but the y-axes there are number of neurons. This issue appears to occur again in Figure 7. It looks like there is normalization occurring, so we suggest using z-scores or renaming the y axis to be less vague.
Figure 7
- The x axis of Fig. 7D appears to be mislabeled and should read “Days in training-stage” rather than “Performance”.
We will correct all the mentioned typos and make the suggested alterations to the figures and text. These are all great points, and we are very grateful to you for bringing them to our attention!
-BU NeuroPreprint JC