MDRNN

MDGRU

class mdgru.model.mdrnn.mdgru.MDRNN(inputarr, dropout, dimensions, kw)[source]

Bases: object

MDRNN class, originally designed to handle the sum of cGRU computations that results in one MDGRU.

_defaults contains initial values for most class attributes.

Parameters: kw – dict containing the following options:

  • use_dropconnect_x [default: True] Should dropconnect be applied to the input?
  • use_dropconnect_h [default: True] Should DropConnect be applied to the state?
  • swap_memory [default: True] Swap GPU RAM with CPU RAM, allowing larger volumes at the cost of slower processing
  • return_cgru_results [default: False] If provided, returns cgru results as channels instead of a sum over all cgrus
  • use_static_rnn [default: False] Use static RNN graph creation (not recommended)
  • no_avg_pool [default: True] Use strided convolutions instead of average pooling
  • filter_size_x [default: [7]] filter sizes for each dimension of the input
  • filter_size_h [default: [7]] filter sizes for each dimension of the state
  • crnn_activation [default: tanh]
  • legacy_cgru_addition [default: False] results in worse weight initialization, only use if you know what you are doing!
  • crnn_class [default: mdgru.model.crnn.cgru.CGRUCell]
  • strides [default: None]
  • name [default: mdgru]
  • num_hidden [default: 100]
Parameters:
  • use_dropconnect_x – Flag if dropconnect regularization should be applied to input weights
  • use_dropconnect_h – Flag if dropconnect regularization should be applied to state weights
  • swap_memory – Flag that trades slower computation with less memory consumption by swapping memory to CPU RAM
  • return_cgru_results – Flag if instead of a sum, the individual cgru results should be returned
  • use_static_rnn – Static rnn graph creation, not recommended
  • no_avg_pool – Flag that defines if instead of average pooling convolutions with strides should be used
  • filter_size_x – Dimensions of filters for the input (the current time dimension is ignored in each cRNN)
  • filter_size_h – Dimensions of filters for the state (the current time dimension is ignored in each cRNN)
  • crnn_activation – Activation function for the candidate / state / output in each cRNN
  • legacy_cgru_addition – Activates the old implementation of the cRNN sum, for backwards compatibility
  • crnn_class – Which cRNN class should be used (CGRUCell for MDGRU)
  • strides – Defines strides to be applied along each dimension
  • inputarr – Input data; needs to be of shape [batch, spatialdim1, …, spatialdimn, channel]
  • dropout – Dropout rate to be applied
  • dimensions – Which dimensions should be processed with a cRNN (by default, all of them)
  • num_hidden – How many hidden units / channels does this MDRNN have
  • name – What should be the name of this MDRNN
_defaults = {
    'crnn_activation': tanh,
    'crnn_class': CGRUCell,
    'filter_size_h': {'value': [7], 'help': 'filter sizes for each dimension of the state', 'type': int},
    'filter_size_x': {'value': [7], 'help': 'filter sizes for each dimension of the input', 'type': int},
    'legacy_cgru_addition': {'value': False, 'help': 'results in worse weight initialization, only use if you know what you are doing!'},
    'name': 'mdgru',
    'no_avg_pool': True,
    'num_hidden': 100,
    'return_cgru_results': {'value': False, 'help': 'If provided, returns cgru results as channels instead of a sum over all cgrus', 'name': 'dontsumcgrus'},
    'strides': None,
    'swap_memory': {'value': True, 'help': 'Dont switch gpu ram with cpu ram to allow for larger volumes to allow for faster processing', 'invert_meaning': 'dont_'},
    'use_dropconnect_h': {'value': True, 'help': 'Should DropConnect be applied to the state?', 'invert_meaning': 'dont_'},
    'use_dropconnect_x': {'value': True, 'help': 'Should dropconnect be applied to the input?', 'invert_meaning': 'dont_'},
    'use_static_rnn': False,
}
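Entries in _defaults are either plain values or dicts carrying a 'value' key plus metadata ('help', 'type', 'invert_meaning'). A minimal sketch of how an option could be resolved from kw with fallback to such a table (the resolve_option helper and the trimmed table below are hypothetical illustrations, not part of mdgru):

```python
# Hypothetical sketch: kw options falling back to a _defaults-style table,
# where an entry is either a plain value or a dict with a 'value' key.
_defaults = {
    "num_hidden": 100,
    "filter_size_x": {"value": [7], "help": "filter sizes for each dimension of the input"},
    "swap_memory": {"value": True, "invert_meaning": "dont_"},
}

def resolve_option(name, kw, defaults=_defaults):
    """Return kw[name] if given, else the default (unwrapping 'value' dicts)."""
    if name in kw:
        return kw[name]
    entry = defaults[name]
    if isinstance(entry, dict) and "value" in entry:
        return entry["value"]
    return entry

kw = {"num_hidden": 64}
print(resolve_option("num_hidden", kw))     # 64 (overridden by kw)
print(resolve_option("filter_size_x", kw))  # [7] (default)
```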
add_cgru(minput, myshape, tempshape, fsx=[7, 7], fsh=[7, 7], strides=None)[source]

Convenience method to unify the cRNN computation; takes the input and shape information and returns the cRNN's results.
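The way per-dimension cRNN results are combined can be illustrated in plain NumPy (my own sketch, not the library's TensorFlow graph code): by default the cGRU outputs are summed into one MDGRU result, while return_cgru_results would instead keep them as separate channels.

```python
import numpy as np

# One cGRU result per processed dimension/direction (shapes are illustrative).
batch, d1, d2, num_hidden = 2, 4, 4, 8
cgru_results = [np.random.randn(batch, d1, d2, num_hidden) for _ in range(4)]

# Default: sum over all cGRUs -> same channel count as a single cGRU.
summed = np.sum(cgru_results, axis=0)            # shape (2, 4, 4, 8)

# return_cgru_results=True: keep results as channels instead of summing.
as_channels = np.concatenate(cgru_results, -1)   # shape (2, 4, 4, 32)
```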

Module contents

class mdgru.model.mdrnn.MDGRUNet(data, target, dropout, kw)[source]

Bases: object

Convenience class combining attributes to be used for multiple MDRNN and voxel-wise fully connected layers.

Parameters: kw – dict containing the following options:
  • add_e_bn [default: False]
  • resmdgru [default: False] Add a residual connection around an MDGRU block
  • vwfc_activation [default: tanh]
_defaults = {
    'add_e_bn': False,
    'resmdgru': {'value': False, 'help': 'Add a residual connection around an MDGRU block'},
    'vwfc_activation': tanh,
}
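The resmdgru option adds a residual connection around an MDGRU block. Conceptually (a NumPy sketch under my own assumptions, not mdgru's TensorFlow implementation), the input is projected voxel-wise to the block's output width and added to the block's result:

```python
import numpy as np

batch, d1, d2, c_in, num_hidden = 2, 4, 4, 3, 8
x = np.random.randn(batch, d1, d2, c_in)
w_skip = np.random.randn(c_in, num_hidden)  # voxel-wise projection of the input

# Stand-in for the MDGRU block's output.
block_out = np.random.randn(batch, d1, d2, num_hidden)

# Residual connection: block output plus the projected input.
res_out = block_out + np.einsum("bxyc,co->bxyo", x, w_skip)
```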
mdgru_bb(inp, dropout, num_hidden, num_output, noactivation=False, name=None, **kw)[source]

Convenience function to combine an MDRNN layer with a voxel-wise fully connected layer.

Parameters:
  • inp – input data
  • dropout – dropout rate
  • num_hidden – number of hidden units, output units of the MDRNN
  • num_output – number of output units of the voxel-wise fully connected layer (Can be None -> no voxel-wise fully connected layer)
  • noactivation – Flag to disable activation of voxel-wise fully connected layer
  • name – Name for this particular MDRNN + vw fully connected layer
  • kw – Arguments for MDRNN and the vw fully connected layer (can override this class’ attributes)
Returns:

Output of the MDRNN followed by the voxel-wise fully connected layer (or the MDRNN output alone if num_output is None)
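A voxel-wise fully connected layer applies the same weight matrix to the channel vector at every spatial location, i.e. a 1×1 convolution. A NumPy sketch of the idea (illustrative only; the shapes and variable names are assumptions, not mdgru's code), using tanh to match the default vwfc_activation:

```python
import numpy as np

batch, d1, d2, num_hidden, num_output = 2, 4, 4, 8, 3
mdrnn_out = np.random.randn(batch, d1, d2, num_hidden)  # MDRNN block output
w = np.random.randn(num_hidden, num_output)             # shared per-voxel weights
b = np.zeros(num_output)

# Same linear map at every spatial location, then the activation.
vwfc = np.tanh(np.einsum("bxyh,ho->bxyo", mdrnn_out, w) + b)
```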