
Releases: genn-team/genn

GeNN 5.0.0

22 Apr 13:06

Release Notes for GeNN 5.0.0

This is a very large update to GeNN that fixes a large number of longstanding bugs and, we hope, will make GeNN easier to use and enable various exciting new features in the near future. The licence has also been switched from GPL to LGPL, making it slightly more liberal by allowing PyGeNN to be used as a component in closed-source systems.

This release breaks backward compatibility, so all models are likely to require updating, but the documentation has also been completely redone and the pre-release version is available at https://genn-team.github.io/genn/documentation/5/. This includes a guide to updating existing models.

New features

  • GeNN has a whole new code generator. This gives the user much better error messages about syntax and typing errors in code strings and will enable us to perform smarter optimisations in future, but it does restrict user code to a well-defined subset of C99 (#595)
  • As well as simulation kernels, GeNN 4.x generated large amounts of boilerplate for allocating memory and copying data between device and host. This resulted in very long compile times with large models. In GeNN 5 we have replaced this with a new runtime which reduces compilation time by around 10x on very large models (#602)
  • In GeNN 4.X, parameters were always of "scalar" type. This resulted in poor code generation when they were used to store integers. Parameters now have types and can also be made dynamic, allowing them to be changed at runtime (#607)
  • Weight update models now have postsynaptic spike-like events, allowing a wider class of learning rules to be implemented (#609)
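The kind of checking the new code generator performs on user code strings can be pictured with a small Python toy. This is purely illustrative and not GeNN's actual transpiler; `check_identifiers` and its keyword list are invented for this sketch:

```python
import re

def check_identifiers(code, declared):
    """Toy check: raise a helpful error if `code` uses an undeclared identifier.

    GeNN 5's real transpiler does full parsing and type checking of its C99
    subset; this sketch only pulls out identifiers with a regex.
    """
    identifiers = set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", code))
    keywords = {"if", "else", "for", "while", "return", "scalar", "int", "float"}
    unknown = identifiers - set(declared) - keywords
    if unknown:
        raise ValueError(f"undeclared identifier(s) in code string: {sorted(unknown)}")

# V and tau are declared variables/parameters; Vrest is a typo the user made
try:
    check_identifiers("V += (Vrest - V) / tau;", declared=["V", "tau"])
except ValueError as error:
    print(error)  # reports that Vrest is undeclared
```

A GeNN 4.x model with the same typo would only fail later, deep inside the generated C++, with an error pointing at generated code rather than at the user's code string.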

Bug fixes

  • PyGeNN only really works with precision set to float (#289)
  • Refine global-register-global transfers (#55)
  • Avoid creating unused variables (#47)
  • PyGeNN doesn't correctly handle neuron variables with delay slots (#393)
  • assign_external_pointer overrides should use explicitly sized integer types (#288)
  • Repeat of spike-like-event conditions in synapse code flawed (#379)
  • Dangerous conflict potential of user and system code (#385)
  • Accessing queued pre and postsynaptic weight update model variables (#402)
  • Linker-imposed model complexity limit on Windows (#408)
  • 'error: duplicate parameter name' when running the ./generate_run test in userproject/Izh_sparse_project (#416)
  • Issues with merging synapse groups where pre or postsynaptic neuron parameters are referenced (#566)
  • Presynaptic Synapse Variable undefined in Event Threshold Condition (#594)

GeNN 5.0.0 RC1

19 Mar 11:53
Pre-release

Release Notes for GeNN 5.0.0

This is a very large update to GeNN that fixes a large number of longstanding bugs and, we hope, will make GeNN easier to use and enable various exciting new features in the near future. The licence has also been switched from GPL to LGPL, making it slightly more liberal by allowing PyGeNN to be used as a component in closed-source systems.

This release breaks backward compatibility, so all models are likely to require updating, but the documentation has also been completely redone and the pre-release version is available at https://genn-team.github.io/genn/documentation/5/. This includes a guide to updating existing models.

User Side Changes

  • Named parameters (#493)
  • Transpiler (#595)
  • Variable dimensions (#598)
  • Dynamic loader (#602)
  • Replace implicit neuron variable references with explicit ones (#604)
  • Dynamic and typed parameters (#607)
  • Fused event generation and postsynaptic spike-like events (#609)
  • Single PSM code string (#612)

Bug fixes

  • PyGeNN only really works with precision set to float (#289)
  • Refine global-register-global transfers (#55)
  • Avoid creating unused variables (#47)
  • PyGeNN doesn't correctly handle neuron variables with delay slots (#393)
  • assign_external_pointer overrides should use explicitly sized integer types (#288)
  • Repeat of spike-like-event conditions in synapse code flawed (#379)
  • Dangerous conflict potential of user and system code (#385)
  • Accessing queued pre and postsynaptic weight update model variables (#402)
  • Linker-imposed model complexity limit on Windows (#408)
  • 'error: duplicate parameter name' when running the ./generate_run test in userproject/Izh_sparse_project (#416)
  • Issues with merging synapse groups where pre or postsynaptic neuron parameters are referenced (#566)
  • Presynaptic Synapse Variable undefined in Event Threshold Condition (#594)

GeNN 4.9.0

11 Oct 10:33
a915552

Release Notes for GeNN 4.9.0

This release adds a number of significant new features to GeNN as well as including a number of bug fixes that have been identified since the 4.8.1 release.
It is intended as the last release for GeNN 4.X.X.
Fixes for serious bugs may be backported if requested but, otherwise, development will be switching to GeNN 5.

User Side Changes

  1. Implemented pygenn.GeNNModel.unload to manually unload GeNN models to improve control in scenarios such as parameter sweeping where multiple PyGeNN models need to be instantiated (#581).
  2. Added Extra Global Parameter references to custom updates (see Defining Custom Updates, Defining your own custom update model and Extra Global Parameter references (#583).
  3. Exposed $(num_pre), $(num_post) and $(num_batches) to all user code strings (#576).
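As an illustration of item 3, a made-up weight update code string (not one of GeNN's built-in models) could use $(num_pre) with the GeNN 4.x $(...) substitution syntax like this:

```python
# Hypothetical GeNN 4.x weight update sim code, written as a Python string as
# it would be passed to PyGeNN. $(addToInSyn, ...) and $(num_pre) are real
# GeNN 4.x substitutions; the normalisation itself is an invented example.
sim_code = """
    // scale each presynaptic spike's contribution by the population size
    $(addToInSyn, $(g) / $(num_pre));
"""
```

Before this change, $(num_pre) and friends were only available in some code string types, so this kind of population-size-aware update required passing the size in as a parameter.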

Bug fixes

  1. Fixed handling of indices specified as sequences types other than numpy arrays in pygenn.SynapseGroup.set_sparse_connections (#597).
  2. Fixed bug in CUDA constant cache estimation which could cause nvLink errors in models with learning rules requiring previous spike times (#589).
  3. Fixed longstanding issue with setuptools that meant PyGeNN sometimes had to be built twice to obtain a functional version. Massive thanks to @erolm-a for contributing this fix (#591).

Optimisations

  1. Reduced the number of layers and generally optimised Docker image. Massive thanks to @bdevans for his work on this (#601).

GeNN 4.8.1

21 Apr 15:41
ef69e59

Release Notes for GeNN v4.8.1

This release fixes a number of issues found in the 4.8.0 release and also includes some optimisations which could be very beneficial for some classes of model.

Bug fixes

  1. Fixed bug relating to merging populations with variable references pointing to variables with different access duplication modes (#557).
  2. Fixed infinite loop that could occur in code generator if a bracket was missed calling a GeNN function in a code snippet (#559).
  3. Fixed bug that meant batched models which required previous spike times failed to compile (#565).
  4. Fixed bug with DLL-searching logic on Windows which meant CUDA backend failed to load on some systems (#579).
  5. Fixed a number of corner cases in the handling of VarAccessDuplication::SHARED_NEURON variables (#578).

Optimisations

  1. When building models with large numbers of populations using the CUDA backend, compile times could be very large. This was at least in part due to over-verbose error handling code being generated. CodeGenerator::CUDA::Preferences::generateSimpleErrorHandling enables the generation of much more minimal error-handling code and can speed up compilation by up to 10x (#554).
  2. Turned on multi-processor compilation option in Visual Studio solutions which speeds up compilation of GeNN by a significant amount (#555).
  3. Fusing postsynaptic models was previously overly conservative, meaning large, highly-connected models using a postsynaptic model with additional state variables would perform poorly. These checks have been relaxed and brought into line with those used for fusing pre and postsynaptic updates coming from weight update models (#567).

GeNN 4.8.0

31 Oct 13:09
5aa20a0

Release Notes for GeNN 4.8.0

This release adds a number of significant new features to GeNN as well as including a number of bug fixes that have been identified since the 4.7.1 release.

User Side Changes

  1. Custom updates extended to work on SynapseMatrixWeight::KERNEL weight update model variables (#524).
  2. Custom updates extended to perform reduction operations across neurons as well as batches (#539).
  3. PyGeNN can now automatically find Visual Studio build tools using functionality in setuptools.msvc.msvc14_get_vc_env (#471).
  4. GeNN now comes with a fully-functional Docker image and releases will be distributed via Dockerhub as well as existing channels. Special thanks to @Stevinson , @jamesturner246 and @bdevans for their help on this (see the README for more information) (#548 and #550).

Bug fixes

  1. Fixed bug relating to merging of synapse groups which perform presynaptic "revInSyn" updates (#520).
  2. Added missing parameter to the pygenn.genn_model.create_custom_postsynaptic_class function so PyGeNN postsynaptic models with extra global parameters can be created (#522).
  3. Correctly substitute 0 for $(batch) when using single-threaded CPU backend (#523).
  4. Fixed issues building PyGeNN with Visual Studio 2017 (#533).
  5. Fixed bug where model might not be rebuilt if sparse connectivity initialisation snippet was changed (#547).
  6. Fixed longstanding bug in the gen_input_structured tool -- used by some userprojects -- where data was written outside of array bounds (#551).
  7. Fixed issue with debug mode of genn-buildmodel.bat when used with single-threaded CPU backend (#551).
  8. Fixed issue where, if custom update models were the only part of a model that required an RNG for initialisation, one might not be instantiated (#540).

GeNN 4.7.1

29 Apr 10:17

Release Notes for GeNN v4.7.1

This release fixes a plethora of issues found in the 4.7.0 release and also includes an optimisation which could be very beneficial for some classes of model.

Bug fixes

  1. Fixed issue meaning that manual changes to the maximum synaptic row length (via SynapseGroup::setMaxConnections) were not detected, so the model might not be rebuilt. Additionally, reduced the strictness of checks in SynapseGroup::setMaxConnections and SynapseGroup::setMaxSourceConnections so that maximum synaptic row and column lengths can be overridden when sparse connectivity initialisation snippets are in use, as long as the overriding values are larger than those provided by the snippet (#515).
  2. Fixed issue preventing PyGeNN being built on Python 2.7 (#510)
  3. Fixed issue meaning that inSyn, denDelayInSyn and revInSynOutSyn variables were not properly zeroed during initialisation (or reinitialisation) of batched models (#509).
  4. Fixed issue where initialization code for synapse groups could be incorrectly merged (#508).
  5. Fixed issue when using custom updates on batched neuron group variables (#507).
  6. Fixed issue in spike recording system where some permutations of kernel and neuron population size would result in memory corruption (#502).
  7. Fixed (long-standing) issue where LLDB wasn't correctly invoked when running genn-buildmodel.sh -d on Mac (#518).
  8. Fixed issue where sparse initialisation kernels weren't correctly generated if they were only required to initialise custom updates (#517).

Optimisations

  1. Using synapse dynamics with sparse connectivity previously had very high memory requirements and poor performance. Both issues have been solved with a new algorithm (#511).

GeNN v4.7.0

11 Feb 17:23
c86cefe

Release Notes for GeNN v4.7.0

This release adds a number of significant new features to GeNN as well as including a number of bug fixes that have been identified since the 4.6.0 release.

User Side Changes

  1. While a wide range of convolutional type connectivity can be implemented using SynapseMatrixConnectivity::PROCEDURAL, the performance is often worse than sparse connectivity. SynapseMatrixConnectivity::TOEPLITZ provides a more efficient solution with InitToeplitzConnectivitySnippet::Conv2D and InitToeplitzConnectivitySnippet::AvgPoolConv2D implementing some typical connectivity patterns (#484).
  2. Shared weight kernels had to be previously provided as extra global parameters via the InitVarSnippet::Kernel variable initialisation snippet. This meant kernels had to be manually allocated to the correct size and couldn't be initialised using standard functionality. SynapseMatrixWeight::KERNEL allows kernels to be treated as standard state variables (#478).
  3. Some presynaptic updates need to update the state of presynaptic neurons as well as postsynaptic ones. These updates can now be made using the $(addToPre,...) function from presynaptic update code, and the destination additional input variable can be specified using SynapseGroup::setPreTargetVar (#479).
  4. On Windows, all models in the same directory would build their generated code into DLLs with the same name, preventing the caching system introduced in v4.5.0 from working properly. CodeGenerator::PreferencesBase::includeModelNameInDLL includes the name of the model in the DLL filename, resolving this problem. This is now the default behaviour in PyGeNN but, when using GeNN from C++, the flag must be manually set and MSBuild projects updated to link to the correct DLL (#476).
  5. Neuron code can now sample the binomial distribution using $(gennrand_binomial) and this can be used to initialise variables with InitVarSnippet::Binomial (#498).
  6. In the latest version of Windows Subsystem for Linux, CUDA is supported but libcuda is mounted in a non-standard location. GeNN's CUDA backend now adds this location to the linker paths (#500).
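The connectivity pattern that InitToeplitzConnectivitySnippet::Conv2D describes (item 1) can be sketched in plain Python. This is a conceptual toy, not GeNN's implementation; `conv2d_targets` is an invented helper, and channels and padding are ignored for brevity:

```python
def conv2d_targets(ih, iw, in_h, in_w, kernel, stride=1):
    """For one presynaptic (input) pixel (ih, iw), return the postsynaptic
    (output) pixels it connects to as (out_row, out_col, k_row, k_col) tuples,
    where (k_row, k_col) indexes the shared kernel weight used."""
    out_h = (in_h - kernel) // stride + 1
    out_w = (in_w - kernel) // stride + 1
    targets = []
    for oh in range(out_h):
        for ow in range(out_w):
            kh = ih - oh * stride  # kernel row that reads this input pixel
            kw = iw - ow * stride
            if 0 <= kh < kernel and 0 <= kw < kernel:
                targets.append((oh, ow, kh, kw))
    return targets

# pixel (1, 1) of a 4x4 input with a 3x3 kernel reaches all four output pixels
print(conv2d_targets(1, 1, 4, 4, 3))
```

Because every row of this connectivity is generated from the same small kernel and pixel position, it never needs to be stored as an explicit sparse matrix, which is what makes the Toeplitz representation more efficient than SynapseMatrixConnectivity::PROCEDURAL for this class of pattern.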

Bug fixes:

  1. Fixed issues with some configurations of InitSparseConnectivitySnippet::Conv2D when stride > 1 which caused incorrect connectivity to be instantiated as well as crashes when this snippet was used to generate sparse connectivity (#489, #491).
  2. Fixed issue where, if $(addToInSynDelay) was used in spike-like event code, it was not detected and dendritic delay structures were not correctly created (#494).
  3. Fixed issue where precision wasn't being correctly applied to neuron additional input variable and sparse connectivity row build state variable initialisation, meaning double precision code could unintentionally be generated (#489).

GeNN 4.6.0

02 Nov 13:27
9d43b7c

Release Notes for GeNN v4.6.0

This release adds a number of significant new features to GeNN as well as several usability improvements for PyGeNN.
It also includes a number of bug fixes that have been identified since the 4.5.1 release.

User Side Changes

  1. As well as performing arbitrary updates and calculating transposes of weight update model variables, custom updates can now be used to implement 'reductions' so, for example, duplicated variables can be summed across model batches (#447, #449).
  2. Previously, to connect a synapse group to a postsynaptic neuron's additional input variable, a custom postsynaptic model had to be used. SynapseGroup::setPSTargetVar and pygenn.SynapseGroup.ps_target_var can now be used to set the target variable of any synapse group (#458).
  3. Previously, weight update model pre and postsynaptic updates and variables got duplicated in the neuron kernel. This was very inefficient and these can now be 'fused' together by setting ModelSpec::setFusePrePostWeightUpdateModels (#461).
  4. PyGeNN now shares its version number with GeNN itself and this is accessible via pygenn.__version__ (#472).
  5. The names of populations and variables are now validated to prevent code with invalid variable names being generated (#443,#448).
  6. As well as being able to read the current spikes via the pygenn.NeuronGroup.current_spikes property, they can now also be set (#445).
  7. Spike-like events were previously not exposed to PyGeNN. These can now be pushed and pulled via pygenn.NeuronGroup.pull_spike_events_from_device, pygenn.NeuronGroup.push_spike_events_to_device, pygenn.NeuronGroup.pull_current_spike_events_from_device and pygenn.NeuronGroup.push_current_spike_events_to_device; and accessed via pygenn.NeuronGroup.current_spike_events (#469).
  8. Added additional error handling to prevent properties of pygenn.GeNNModel that can only be set before the model was built being set afterwards (#464).
  9. Variable references can now reference custom update variables (#446).
  10. Updated the default parameters used in the MBody1 example to be more sensible (#473).
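The batch reduction described in item 1 can be pictured with a minimal pure-Python stand-in. This is not the GeNN custom update API; the function and layout here are invented for illustration, assuming batched variables are stored flat as [batch * neuron]:

```python
def reduce_batches(batched, num_batches, num_neurons):
    """Sum a variable that is duplicated across batches into one shared copy,
    e.g. accumulating per-batch gradients for gradient-based learning."""
    return [
        sum(batched[b * num_neurons + i] for b in range(num_batches))
        for i in range(num_neurons)
    ]

grads = [1.0, 2.0,   # batch 0
         3.0, 4.0]   # batch 1
print(reduce_batches(grads, num_batches=2, num_neurons=2))  # [4.0, 6.0]
```

In GeNN itself this runs on the device as a custom update, so the per-batch copies never need to be transferred to the host just to be summed.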

Bug fixes:

  1. Fixed an issue that was preventing genn-buildmodel.sh correctly handling paths with spaces (#444).
  2. Fixed multiple issues with sparse synapse index narrowing (#460).
  3. Fixed issue where, if GeNN is run in a locale where , is used for the decimal point, some generated code was incorrectly formatted (#468).
  4. Fixed several small issues preventing GeNN from building on GCC 5 and Visual C++ 2017 (#462).

GeNN 4.5.1

22 Jul 12:20
4e1d00e

Release Notes for GeNN v4.5.1 (PyGeNN 0.4.6)

This release fixes several small issues found in the 4.5.0 release.

Bug fixes:

  1. Fixed cause of the warnings about memory leaks which were generated when sparse connectivity initialisation snippets were defined in PyGeNN (#438).
  2. Fixed bug in model change detection which resulted in the memory usage estimate increasing every time the model was subsequently changed (#440).
  3. Fixed several bugs affecting the implementation of custom update models in CUDA and OpenCL (#439).

GeNN 4.5.0

15 Jul 11:31
1d46a69

Release Notes for GeNN v4.5.0 (PyGeNN 0.4.5)

This release adds a number of significant new features to GeNN as well as several usability improvements for PyGeNN.
It also includes a number of bug fixes that have been identified since the 4.4.0 release.

User Side Changes

  1. When performing inference on datasets, batching helps fill the GPU and improve performance. This could previously be achieved using "master" and "slave" synapse populations, but this didn't scale well. Models can now be automatically batched using ModelSpec::setBatchSize or pygenn.genn_model.GeNNModel.batch_size (#392).
  2. As well as more typical neuron, weight update, postsynaptic and current source models, you can now define custom update models which define a process which can be applied to any variable in the model. These can be used for e.g. resetting state variables or implementing optimisers for gradient-based learning (#405).
  3. Model compilation and CUDA block size optimisation could be rather slow in previous versions. More work is still required in this area but code will now only be regenerated if the model has actually changed, and block sizes will only be re-optimised for modules which have changed. Rebuilding can be forced with the -f flag to genn-buildmodel or the force_rebuild flag to pygenn.GeNNModel.build (#427, #430).
  4. Binary PyGeNN wheels are now always built with Python 3 (#401).
  5. To aid debugging, debug versions of PyGeNN can now be built (#396).
  6. OpenCL performance on AMD devices is improved - this has only been tested on a Radeon RX 5700 XT so any feedback from users with other devices would be much appreciated (#390).
  7. Exceptions raised by GeNN are now correctly passed through PyGeNN to Python (#433).
  8. Spike times (and spike-like event times) can now be accessed, pushed and pulled from PyGeNN (see pygenn.genn_groups.NeuronGroup.spike_times, pygenn.genn_groups.NeuronGroup.push_spike_times_to_device and pygenn.genn_groups.NeuronGroup.pull_spike_times_from_device ) (#432)
  9. On models where postsynaptic merging isn't enabled, the postsynaptic input current from a synapse group can now be accessed from PyGeNN via pygenn.genn_groups.SynapseGroup.in_syn; and pushed and pulled with pygenn.genn_groups.SynapseGroup.push_in_syn_to_device and pygenn.genn_groups.SynapseGroup.pull_in_syn_from_device respectively (#432).
  10. Accessing extra global parameters from PyGeNN was previously rather cumbersome. Now, you don't need to manually pass a size to e.g. pygenn.genn_groups.NeuronGroup.pull_extra_global_param_from_device and, if you are using non-pointer extra global parameters, you no longer need to call e.g. pygenn.genn_groups.NeuronGroup.set_extra_global_param before loading your model (#415).
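The batching idea from item 1 amounts to one set of update kernels sweeping over several copies of the model state laid out contiguously, rather than running many separate model instances. A minimal pure-Python stand-in (not PyGeNN; the function and decay dynamics are invented for illustration):

```python
def step_batched(v, num_batches, num_neurons, decay=0.9):
    """One toy leaky-decay update applied to every batch in a single pass.

    `v` holds num_batches copies of a num_neurons-element state variable,
    stored flat as [batch * neuron]; one loop (standing in for one kernel
    launch) updates all of them together.
    """
    for idx in range(num_batches * num_neurons):
        v[idx] *= decay
    return v

v = [1.0] * 6                      # 3 batches of 2 neurons each
step_batched(v, num_batches=3, num_neurons=2)
```

On a GPU, launching one kernel over all batches keeps the device occupied far better than launching a tiny kernel per instance, which is why batching helps inference throughput.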

Bug fixes:

  1. cudaFree was incorrectly called twice on zero-copy variables, causing crashes on exit (#395)
  2. Built-in Izhikevich neurons incorrectly used an auto-refractory mechanism, limiting their maximum firing rate (#404)
  3. On Windows, 64-bit version of compiler is now always used (#407)
  4. Fixed issues with CUDA 9.0 and 9.1 introduced in v4.4.0 release (#412)
  5. Fixed race condition relating to accessing previous spike times (#414)
  6. Fixed bug in column-wise connectivity initialisation (#419)
  7. Fixed issue with binomialInverseCDF function (used for calculating the maximum row length of probabilistic connectivity) which could fail when using some parameter combinations (#426)
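The role of the binomialInverseCDF function mentioned in item 7 can be sketched in Python. This is an illustrative reimplementation, not GeNN's actual code: for fixed-probability connectivity, a safe maximum row length is the smallest k whose binomial CDF reaches a chosen quantile.

```python
from math import comb

def binomial_inverse_cdf(quantile, n, p):
    """Smallest k with P(X <= k) >= quantile, where X ~ Binomial(n, p).

    With n potential postsynaptic targets each connected with probability p,
    evaluating this at a quantile very close to 1 gives a row length that
    almost surely won't be exceeded, so memory can be allocated up front.
    """
    cdf = 0.0
    for k in range(n + 1):
        cdf += comb(n, k) * p**k * (1.0 - p)**(n - k)
        if cdf >= quantile:
            return k
    return n

# with 10 potential targets each connected with p = 0.5,
# the median row length is 5
print(binomial_inverse_cdf(0.5, 10, 0.5))  # 5
```

The bug in #426 concerned parameter combinations where this kind of search could fail; the naive summation above is only a sketch and would itself lose precision for large n and extreme quantiles.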