
Showing 1–11 of 11 results for author: Pillow, J W

Searching in archive stat.
  1. arXiv:2502.18347 [pdf, ps, other]

    q-bio.NC stat.ML

    Modeling Neural Activity with Conditionally Linear Dynamical Systems

    Authors: Victor Geadah, Amin Nejatbakhsh, David Lipshutz, Jonathan W. Pillow, Alex H. Williams

    Abstract: Neural population activity exhibits complex, nonlinear dynamics, varying in time, over trials, and across experimental conditions. Here, we develop Conditionally Linear Dynamical System (CLDS) models as a general-purpose method to characterize these dynamics. These models use Gaussian Process (GP) priors to capture the nonlinear dependence of circuit dynamics on task and behavioral variables. Cond…

    Submitted 30 October, 2025; v1 submitted 25 February, 2025; originally announced February 2025.

    Comments: 24 pages, 7 figures. Associated code available at: https://github.com/neurostatslab/clds. To appear at the 39th Conference on Neural Information Processing Systems (NeurIPS 2025)
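The conditional structure described above is easy to illustrate with a toy simulation. A minimal sketch, assuming the dynamics matrix depends linearly on a scalar task covariate (the paper instead places GP priors on this dependence; all names and parameter values here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamics(u, base, mod):
    """Hypothetical condition-dependent dynamics matrix: linear in covariate u."""
    return base + u * mod

T, d = 200, 2
base = np.array([[0.95, -0.10], [0.10, 0.95]])  # baseline decaying rotation
mod = np.array([[0.00, -0.05], [0.05, 0.00]])   # covariate modulates rotation speed
u = np.sin(np.linspace(0, 2 * np.pi, T))        # slowly varying task covariate

x = np.zeros((T, d))
x[0] = rng.standard_normal(d)
for t in range(1, T):
    x[t] = dynamics(u[t], base, mod) @ x[t - 1] + 0.05 * rng.standard_normal(d)
```

Conditioned on the covariate path, each step is linear-Gaussian, which is what makes inference in such models tractable; the authors' fitting code is at the repository linked above.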

  2. arXiv:2303.02060 [pdf, other]

    stat.ML cs.LG

    Spectral learning of Bernoulli linear dynamical systems models

    Authors: Iris R. Stone, Yotam Sagiv, Il Memming Park, Jonathan W. Pillow

    Abstract: Latent linear dynamical systems with Bernoulli observations provide a powerful modeling framework for identifying the temporal dynamics underlying binary time series data, which arise in a variety of contexts such as binary decision-making and discrete stochastic processes (e.g., binned neural spike trains). Here we develop a spectral learning method for fast, efficient fitting of probit-Bernoulli…

    Submitted 26 July, 2023; v1 submitted 3 March, 2023; originally announced March 2023.

    Comments: Published in Transactions on Machine Learning Research (https://jmlr.org/tmlr/papers/)

    Journal ref: Transactions on Machine Learning Research (2023)
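The generative model in question pairs a linear-Gaussian latent process with probit-Bernoulli emissions. A minimal simulation sketch (scalar latent, hypothetical parameter values; the paper's contribution is a spectral, moment-based fitting method, not shown here):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

T, a, c = 500, 0.9, 1.5
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + 0.3 * rng.standard_normal()  # AR(1) latent dynamics

# probit link maps the latent state to a Bernoulli emission probability
p = np.array([0.5 * (1.0 + erf(c * v / sqrt(2.0))) for v in x])
y = rng.binomial(1, p)   # binary observations, e.g. binned spike indicators
```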

  3. arXiv:2202.13426 [pdf, other]

    cs.LG q-bio.NC stat.ML

    Bayesian Active Learning for Discrete Latent Variable Models

    Authors: Aditi Jha, Zoe C. Ashwood, Jonathan W. Pillow

    Abstract: Active learning seeks to reduce the amount of data required to fit the parameters of a model, thus forming an important class of techniques in modern machine learning. However, past work on active learning has largely overlooked latent variable models, which play a vital role in neuroscience, psychology, and a variety of other engineering and scientific disciplines. Here we address this gap by pro…

    Submitted 2 June, 2023; v1 submitted 27 February, 2022; originally announced February 2022.

    Comments: 38 pages (including references and an appendix), 7 figures in main text

    Journal ref: Neural Computation (2024), 36 (3): 437-474
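The core idea — choose the next stimulus to maximize expected information gain about the model parameters — can be sketched for a plain logistic model (the paper develops this for discrete latent variable models; everything below, including the BALD-style score, is an illustrative stand-in):

```python
import numpy as np

rng = np.random.default_rng(2)

def bern_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

# posterior samples over a 1-D logistic weight (stand-in for a fitted posterior)
w = rng.normal(1.0, 0.5, size=1000)
candidates = np.linspace(-3, 3, 61)

# information gain = marginal predictive entropy - mean conditional entropy
P = 1.0 / (1.0 + np.exp(-np.outer(w, candidates)))   # (samples, candidates)
gain = bern_entropy(P.mean(axis=0)) - bern_entropy(P).mean(axis=0)
x_next = candidates[np.argmax(gain)]   # most informative stimulus to present
```

The gain is a Monte Carlo estimate of the mutual information between the next observation and the parameters, so it is nonnegative and vanishes where all posterior samples agree on the prediction.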

  4. arXiv:2201.03128 [pdf, other]

    stat.ML cs.LG

    Loss-calibrated expectation propagation for approximate Bayesian decision-making

    Authors: Michael J. Morais, Jonathan W. Pillow

    Abstract: Approximate Bayesian inference methods provide a powerful suite of tools for finding approximations to intractable posterior distributions. However, machine learning applications typically involve selecting actions, which -- in a Bayesian setting -- depend on the posterior distribution only via its contribution to expected utility. A growing body of work on loss-calibrated approximate inference me…

    Submitted 9 January, 2022; originally announced January 2022.
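The point that actions depend on the posterior only through expected utility is worth making concrete. A small sketch of Bayesian decision-making under an asymmetric loss, using posterior samples (loss-calibrated EP goes further and shapes the posterior approximation itself around this objective; the loss and numbers here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

theta = rng.normal(0.2, 1.0, size=5000)   # samples from an (approximate) posterior

def loss(action, th):
    err = action - th
    return np.where(err > 0, 3.0 * err, -err)   # overestimates cost 3x more

actions = np.linspace(-3, 3, 121)
expected = np.array([loss(a, theta).mean() for a in actions])
a_star = actions[np.argmin(expected)]
# under this pinball-type loss the optimal action is the 0.25 posterior
# quantile, not the posterior mean
```

An approximation tuned only for posterior accuracy can place its errors exactly where this expected-utility computation is most sensitive, which is the gap loss calibration targets.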

  5. arXiv:2006.11412 [pdf, other]

    cs.CV cs.LG stat.ML

    High-contrast "gaudy" images improve the training of deep neural network models of visual cortex

    Authors: Benjamin R. Cowley, Jonathan W. Pillow

    Abstract: A key challenge in understanding the sensory transformations of the visual system is to obtain a highly predictive model of responses from visual cortical neurons. Deep neural networks (DNNs) provide a promising candidate for such a model. However, DNNs require orders of magnitude more training data than neuroscientists can collect from real neurons because experimental recording time is severely…

    Submitted 13 June, 2020; originally announced June 2020.
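The "gaudy" transform itself is simple to sketch: pixels are pushed to the extremes to maximize contrast. The mean-thresholding rule below is our reading of the idea, not necessarily the paper's exact recipe:

```python
import numpy as np

def gaudy(image):
    """High-contrast binarization: push each pixel to 0 or 255 around the mean
    (assumed rule; see the paper for the exact transform)."""
    return np.where(image > image.mean(), 255, 0).astype(np.uint8)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)   # toy grayscale image
g = gaudy(img)
```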

  6. arXiv:2001.04571 [pdf, other]

    q-bio.NC stat.ML

    Unifying and generalizing models of neural dynamics during decision-making

    Authors: David M. Zoltowski, Jonathan W. Pillow, Scott W. Linderman

    Abstract: An open question in systems and computational neuroscience is how neural circuits accumulate evidence towards a decision. Fitting models of decision-making theory to neural activity helps answer this question, but current approaches limit the number of these models that we can fit to neural data. Here we propose a unifying framework for modeling neural activity during decision-making tasks. The fr…

    Submitted 13 January, 2020; originally announced January 2020.
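One canonical member of the model family such a framework must accommodate is the drift-diffusion (accumulation-to-bound) model. A minimal trial simulator, with hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(4)

def ddm_trial(drift, bound, dt=0.01, sigma=1.0, max_t=5.0):
    """Accumulate noisy evidence until a bound is hit; return (choice, RT)."""
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= bound else 0), t

trials = [ddm_trial(drift=1.0, bound=1.0) for _ in range(500)]
accuracy = np.mean([c for c, _ in trials])   # positive drift favors choice 1
```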

  7. arXiv:1906.03318 [pdf, other]

    stat.ML cs.LG

    Efficient non-conjugate Gaussian process factor models for spike count data using polynomial approximations

    Authors: Stephen L. Keeley, David M. Zoltowski, Yiyi Yu, Jacob L. Yates, Spencer L. Smith, Jonathan W. Pillow

    Abstract: Gaussian Process Factor Analysis (GPFA) has been broadly applied to the problem of identifying smooth, low-dimensional temporal structure underlying large-scale neural recordings. However, spike trains are non-Gaussian, which motivates combining GPFA with discrete observation models for binned spike count data. The drawback to this approach is that GPFA priors are not conjugate to count model like…

    Submitted 5 October, 2020; v1 submitted 7 June, 2019; originally announced June 2019.
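The conjugacy trick hinted at in the abstract can be sketched in one dimension: replace the exp nonlinearity in the Poisson log-likelihood y·f − exp(f) with a quadratic, so the likelihood becomes Gaussian in f and the GP posterior has closed form. The paper uses Chebyshev polynomial approximations; plain least-squares `np.polyfit` is a stand-in here:

```python
import numpy as np

rng = np.random.default_rng(5)

# quadratic approximation exp(f) ~= c2 f^2 + c1 f + c0 on a working range
grid = np.linspace(-3, 3, 200)
c2, c1, c0 = np.polyfit(grid, np.exp(grid), 2)

# GP prior over a latent log-rate trajectory
T = 50
ts = np.linspace(0, 1, T)
K = np.exp(-0.5 * (ts[:, None] - ts[None, :])**2 / 0.1**2) + 1e-6 * np.eye(T)
f_true = np.linalg.cholesky(K) @ rng.standard_normal(T)
y = rng.poisson(np.exp(f_true))                      # binned spike counts

# with the quadratic, log p(y|f) is Gaussian in f with precision 2*c2 per bin,
# so the posterior mean is ordinary GP regression on pseudo-observations
prec = 2.0 * c2
f_post = K @ np.linalg.solve(K + np.eye(T) / prec, (y - c1) / prec)
```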

  8. arXiv:1811.11684 [pdf, other]

    cs.LG stat.ML

    Shared Representational Geometry Across Neural Networks

    Authors: Qihong Lu, Po-Hsuan Chen, Jonathan W. Pillow, Peter J. Ramadge, Kenneth A. Norman, Uri Hasson

    Abstract: Different neural networks trained on the same dataset often learn similar input-output mappings with very different weights. Is there some correspondence between these neural network solutions? For linear networks, it has been shown that different instances of the same network architecture encode the same representational similarity matrix, and their neural activity patterns are connected by ortho…

    Submitted 16 March, 2019; v1 submitted 28 November, 2018; originally announced November 2018.

    Comments: Integration of Deep Learning Theories workshop, NeurIPS 2018
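The claim about orthogonal transformations is easy to verify numerically: an orthogonal change of basis preserves inner products between activity patterns, and hence the representational similarity matrix. A sketch (cosine-similarity RSM; variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(6)

def rsm(acts):
    """Cosine-similarity representational similarity matrix over stimuli."""
    X = acts / np.linalg.norm(acts, axis=1, keepdims=True)
    return X @ X.T

stimuli, units = 20, 50
acts_a = rng.standard_normal((stimuli, units))
Q, _ = np.linalg.qr(rng.standard_normal((units, units)))  # random orthogonal map
acts_b = acts_a @ Q   # a "different network" related by an orthogonal transform

same_geometry = np.allclose(rsm(acts_a), rsm(acts_b))     # True
```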

  9. arXiv:1711.10058 [pdf, other]

    stat.ML

    Dependent relevance determination for smooth and structured sparse regression

    Authors: Anqi Wu, Oluwasanmi Koyejo, Jonathan W. Pillow

    Abstract: In many problem settings, parameter vectors are not merely sparse but dependent in such a way that non-zero coefficients tend to cluster together. We refer to this form of dependency as "region sparsity." Classical sparse regression methods, such as the lasso and automatic relevance determination (ARD), model parameters as independent a priori and therefore do not exploit such dependencies.…

    Submitted 24 January, 2019; v1 submitted 27 November, 2017; originally announced November 2017.

    Comments: 42 pages, 15 figures, submitted to JMLR
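"Region sparsity" — nonzero coefficients clustering together — can be visualized with a quick generative sketch in which a smooth latent function gates the support (an illustration of the phenomenon, not the paper's dependent-relevance prior):

```python
import numpy as np

rng = np.random.default_rng(7)

d = 100
idx = np.arange(d)
# smooth latent via a squared-exponential Gram matrix (lengthscale 5 bins)
K = np.exp(-0.5 * (idx[:, None] - idx[None, :])**2 / 5.0**2) + 1e-8 * np.eye(d)
u = np.linalg.cholesky(K) @ rng.standard_normal(d)
w = np.where(u > 0.5, rng.standard_normal(d), 0.0)  # support clusters in regions
```

Because u varies smoothly across indices, whichever coefficients are switched on tend to occur in contiguous runs, which independent-prior methods like lasso or ARD do not encode.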

  10. arXiv:1704.00060 [pdf, other]

    stat.ML

    Exploiting gradients and Hessians in Bayesian optimization and Bayesian quadrature

    Authors: Anqi Wu, Mikio C. Aoi, Jonathan W. Pillow

    Abstract: An exciting branch of machine learning research focuses on methods for learning, optimizing, and integrating unknown functions that are difficult or costly to evaluate. A popular Bayesian approach to this problem uses a Gaussian process (GP) to construct a posterior distribution over the function of interest given a set of observed measurements, and selects new points to evaluate using the statist…

    Submitted 29 March, 2018; v1 submitted 31 March, 2017; originally announced April 2017.

    Comments: 20 pages, 8 figures
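Incorporating gradient observations into a GP is a matter of extending the joint Gaussian with the kernel's derivative cross-covariances. A sketch for a 1-D RBF kernel (the test function and all values are ours):

```python
import numpy as np

ell = 1.0   # RBF lengthscale

def k00(a, b):   # cov(f(a), f(b))
    r = a[:, None] - b[None, :]
    return np.exp(-0.5 * r**2 / ell**2)

def k10(a, b):   # cov(f'(a), f(b)) = d/da k00(a, b)
    r = a[:, None] - b[None, :]
    return -(r / ell**2) * np.exp(-0.5 * r**2 / ell**2)

def k11(a, b):   # cov(f'(a), f'(b))
    r = a[:, None] - b[None, :]
    return (1.0 / ell**2 - r**2 / ell**4) * np.exp(-0.5 * r**2 / ell**2)

X = np.array([0.0, 1.0, 2.0])
obs = np.concatenate([np.sin(X), np.cos(X)])      # values and exact gradients

C = np.block([[k00(X, X), k10(X, X).T],
              [k10(X, X), k11(X, X)]]) + 1e-8 * np.eye(6)

Xs = np.linspace(0.0, 2.0, 21)
cross = np.hstack([k00(Xs, X), k10(X, Xs).T])     # cov(f(Xs), [f(X); f'(X)])
mean = cross @ np.linalg.solve(C, obs)            # posterior mean given both
```

Conditioning on gradients (and, by the same construction, Hessians) tightens the posterior between evaluations, which is what makes derivative information valuable in Bayesian optimization and quadrature.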

  11. arXiv:1610.08465 [pdf, other]

    stat.ML q-bio.NC

    Bayesian latent structure discovery from multi-neuron recordings

    Authors: Scott W. Linderman, Ryan P. Adams, Jonathan W. Pillow

    Abstract: Neural circuits contain heterogeneous groups of neurons that differ in type, location, connectivity, and basic response properties. However, traditional methods for dimensionality reduction and clustering are ill-suited to recovering the structure underlying the organization of neural circuits. In particular, they do not take advantage of the rich temporal dependencies in multi-neuron recordings a…

    Submitted 26 October, 2016; originally announced October 2016.

    Comments: 11 pages, 5 figures, to appear in Advances in Neural Information Processing Systems 2016
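The kind of latent structure meant here can be sketched with a stochastic-block-model-style generative model, in which each neuron carries a latent cluster label and connection probabilities depend on the (pre, post) cluster pair (the paper's models additionally exploit temporal dependencies in the recordings; this static sketch, with hypothetical sizes, only shows the clustered-connectivity idea):

```python
import numpy as np

rng = np.random.default_rng(8)

n, n_clusters = 30, 3
z = rng.integers(n_clusters, size=n)                       # latent cluster labels
P = rng.uniform(0.05, 0.5, size=(n_clusters, n_clusters))  # pairwise conn. probs
A = rng.binomial(1, P[z[:, None], z[None, :]])             # sampled connectivity
```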