Myriad is a next-generation parallel simulator for biophysical (Hodgkin-Huxley) neurons and networks, running on CPUs and on CUDA-enabled GPUs.  It is in an early alpha stage as of the end of 2016.  While the code is open on Github, we are not yet able to support any external users until development has proceeded further.  Please contact Thomas Cleland if you would like to be considered for early testing.

Myriad is an open-source project, and we intend to integrate the best work of the free software community in its continuing development.  Watch this space for updates.

Why another simulator?
  • We work with realistic biophysical models of neurons and networks.  These models can quickly become quite computationally intensive, and hence require multithreaded computational tools and parallel processing.
  • Moreover, our models tend to be densely integrated — meaning that they have many analogue interactions such as gap junctions and graded synapses that require many model elements to update one another at every time step.  Sadly, such densely integrated models scale poorly on computational clusters, because of the number of updates that must be pushed across the slow (Ethernet) connections between cluster nodes.  We’re stubborn about our graded interactions because we try hard to make our modeling strategies serve the neurophysiological questions, rather than vice versa.
  • CUDA GPU hardware is a promising tool for densely integrated parallel simulations, but effective use requires a simulator designed specifically, from the ground up, for the strengths and limitations of GPU hardware.  Hence, Myriad.  (Myriad also executes in parallel on CPUs using OpenMP).
  • For spiking neural network simulations using parallel (CPU or GPU) computation, you may be interested in GeNN.
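
The graded interactions described above can be made concrete with a small sketch.  In an ohmic gap junction, the current flowing into compartment i from compartment j is I = g_gap · (V_j − V_i), and every coupled pair must exchange voltages at every time step.  The following Python sketch (all names and values are illustrative, not Myriad’s actual API) shows the all-pairs, per-step traffic that saturates inter-node Ethernet links on a cluster:

```python
def gap_junction_currents(v, couplings, g_gap=0.5):
    """Ohmic gap-junction current into each compartment.

    v         : list of membrane voltages (mV), one per compartment
    couplings : list of (i, j) index pairs of coupled compartments
    g_gap     : gap-junction conductance (nS), shared here for simplicity
    """
    currents = [0.0] * len(v)
    for i, j in couplings:
        # Current flows down the voltage gradient.  Every coupled pair
        # must exchange state at EVERY time step; on a cluster, each
        # pair that straddles two nodes generates inter-node traffic.
        currents[i] += g_gap * (v[j] - v[i])
        currents[j] += g_gap * (v[i] - v[j])
    return currents

# Example: three compartments, two coupled pairs.
v = [-65.0, -55.0, -70.0]
currents = gap_junction_currents(v, [(0, 1), (1, 2)])
# currents == [5.0, -12.5, 7.5]
```

On shared memory (a single GPU or a multicore CPU) these exchanges are just reads and writes; spread across cluster nodes, each one becomes a network message per time step.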
Design principles of the Myriad simulator
  • Computation is based on a radically granular low-level implementation framework enabling the “trivial parallelization” of any model, including models with dense interconnections not usually considered good candidates for parallelization.  Myriad removes all hierarchy from computational models at the implementation level, recognizing only two computational elements: (1) isometric compartments with passive properties, and (2) mechanisms that connect exactly two compartments in exactly one direction.  Neurons per se do not exist; charge and mass flow between adjacent segments of neurite are managed by reciprocal adjacency mechanisms.
  • Any integer number of these computational elements can be executed on each available GPU core.  Communication is based on a uniform memory access (UMA) shared-memory architecture, with each core having equal access to shared memory.  Occupancy is maximized by CUDA 5.0+ dynamic parallelism, and global synchronization is managed by GPU hardware (barrier synchronization).  Myriad is fully thread-scalable to any number of available GPU threads without explicit optimization by the user; the end user does not need to write parallel code.
  • The reduced GPU instruction set limits the extensibility of GPU-enabled applications.  We have circumvented this limitation by developing the first object-oriented programming (OOP) model that runs natively on GPUs under CUDA, including on-GPU dynamic type inference and built-in inheritance (Rittner & Cleland 2014a).
  • Best practices in model definition (from the end user perspective) include the flexible rescaling of parameters such as model dimensions and numbers of compartments, the abstraction of neuronal regions into sections that share properties, the hierarchical segregation of neuron (template) definitions from network definitions, and the inclusion of familiar concepts such as an extracellular reference space.  To adhere to these best practices, Myriad incorporates a separate “user level”, written in Python, in which users construct models using tools from the top-level Python myriad modules and a defined subset of the Python (3.4+) language. Python metaprogramming is then used to generate “implementation level” C code on the fly for execution on GPU or CPU cores (Rittner et al., 2014).
  • Users retain full flexibility in model definition at the Python level because of the minimal OOP model constructed for the implementation level.  This enables end users to define wholly novel objects and mechanisms, such as ion channels (or ions), as fully extensible user-defined models that run on the GPU.  It also enables models to be written identically for CPU and GPU execution, switching between the two via a compiler option.
  • We expect to release a public beta of Myriad in 2017-2018.  Updates will be posted here.
  • Myriad will likely use NIX, a rich file format for neuroscience data developed by the INCF G-Node, as its native file format.  NIX files can be based on HDF5 or host-filesystem (directory tree) backends.  Other likely export filetypes include raw numpy arrays; NeoHDF, a structured HDF5 container for numpy objects defined by the Neo/NeoIO project; and the NWB file format presently in development, essentially an internal structure definition within HDF5.
  • Python toolsets for simulation governance, including multi-objective parameterization algorithms, will be incorporated directly into Myriad.
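
The two-element abstraction described above can be sketched in plain Python.  All class and field names here are hypothetical illustrations, not Myriad’s actual API; the point is the flat, hierarchy-free update loop in which every mechanism and every compartment updates independently, and therefore maps trivially onto parallel threads:

```python
class Compartment:
    """Isometric compartment with passive properties (illustrative)."""
    def __init__(self, vm=-65.0):
        self.vm = vm          # membrane voltage (mV)
        self.i_total = 0.0    # current accumulated during this step

class Mechanism:
    """Connects exactly two compartments in exactly one direction."""
    def __init__(self, source, target, g=1.0):
        self.source, self.target, self.g = source, target, g

    def update(self):
        # Push an ohmic current from source onto target.  A reciprocal
        # adjacency is simply a second Mechanism in the other direction.
        self.target.i_total += self.g * (self.source.vm - self.target.vm)

def step(compartments, mechanisms, dt=0.025, cm=1.0):
    # Phase 1: every mechanism updates independently -- this flat loop
    # is the part that parallelizes trivially across threads.
    for m in mechanisms:
        m.update()
    # Phase 2: every compartment integrates independently (forward Euler).
    for c in compartments:
        c.vm += dt * c.i_total / cm
        c.i_total = 0.0

# Two adjacent "neurite segments" coupled by a reciprocal adjacency pair;
# no Neuron object exists anywhere.
a = Compartment(vm=-70.0)
b = Compartment(vm=-60.0)
mechanisms = [Mechanism(a, b), Mechanism(b, a)]
step([a, b], mechanisms)
```

After one step the two voltages drift toward each other, with no object in the model larger than a compartment or a mechanism.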
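
The metaprogramming step, by which Python-level definitions are lowered to implementation-level C, can likewise be illustrated with a toy code generator.  The template and the function names below are purely hypothetical, not Myriad’s actual generator; the sketch only shows the general pattern of rendering C source from a Python-level parameterization:

```python
# Hypothetical template for a simple ohmic mechanism; doubled braces
# are literal braces in the emitted C.
C_TEMPLATE = """
double {name}_current(double v_source, double v_target)
{{
    /* generated from the Python-level definition of '{name}' */
    return {g} * (v_source - v_target);
}}
"""

def generate_mechanism_c(name, g):
    """Render implementation-level C for one mechanism definition."""
    return C_TEMPLATE.format(name=name, g=g)

# A Python-level definition becomes a C function ready for compilation
# (for CPU via OpenMP or for GPU via CUDA, per the compiler option).
c_source = generate_mechanism_c("gap_junction", 0.5)
```

In Myriad itself this lowering is driven by Python metaprogramming over a defined subset of Python 3.4+, rather than by simple string templates.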
About the name
  • “The myriad things” is Angus Graham’s English translation of phrases found in the Chuang-tzu that refer to all of the objects, creatures, and mysteries that surround one at all times.  Its use here is meant to evoke both the overwhelming complexity of the neuroscientific tasks before us and the utilization of massive numbers of GPU cores to address some of the resulting challenges.