deepchem.nn package

Submodules

deepchem.nn.activations module

Activations for models.

Copied over from Keras.

deepchem.nn.activations.elu(x, alpha=1.0)[source]
deepchem.nn.activations.get(identifier)[source]
deepchem.nn.activations.get_from_module(identifier, module_params, module_name, instantiate=False, kwargs=None)[source]

Retrieves a class or function member of a module.

Parameters:
  • identifier – The object to retrieve. It can be specified by name (as a string) or by dict. In any other case, identifier itself is returned without any changes.
  • module_params – The members of a module (e.g. the output of globals()).
  • module_name – String; the name of the target module. Used only to format error messages.
  • instantiate – Whether to instantiate the returned object (if it is a class).
  • kwargs – A dictionary of keyword arguments to pass to the class constructor if instantiate is True.
Returns:
The target object.

Raises:

ValueError – If the identifier cannot be found.
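
For example, a minimal usage sketch (module path per this page; behavior per the description above):

>>> from deepchem.nn import activations
>>> relu_fn = activations.get('relu')    # resolved by name
>>> same_fn = activations.get(relu_fn)   # a callable is returned unchanged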

deepchem.nn.activations.hard_sigmoid(x)[source]
deepchem.nn.activations.linear(x)[source]
deepchem.nn.activations.relu(x, alpha=0.0, max_value=None)[source]
deepchem.nn.activations.selu(x)[source]
deepchem.nn.activations.sigmoid(x)[source]
deepchem.nn.activations.softmax(x)[source]
deepchem.nn.activations.softplus(x)[source]
deepchem.nn.activations.softsign(x)[source]
deepchem.nn.activations.tanh(x)[source]

deepchem.nn.constraints module

Place constraints on models.

class deepchem.nn.constraints.Constraint[source]

Bases: object

class deepchem.nn.constraints.MaxNorm(m=2, axis=0)[source]

Bases: deepchem.nn.constraints.Constraint

MaxNorm weight constraint.

Constrains the weights incident to each hidden unit to have a norm less than or equal to a desired value.

Parameters:
  • m – The maximum norm for the incoming weights.
  • axis – Integer; axis along which to calculate weight norms. For instance, in a Dense layer the weight matrix has shape (input_dim, output_dim); set axis to 0 to constrain each weight vector of length (input_dim,).
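
A minimal construction sketch; the constraint instance would be applied by any layer that accepts weight constraints:

>>> from deepchem.nn import constraints
>>> max_norm = constraints.MaxNorm(m=2, axis=0)  # cap each incoming weight vector's norm at 2
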
class deepchem.nn.constraints.NonNeg[source]

Bases: deepchem.nn.constraints.Constraint

Constrains the weights to be non-negative.

class deepchem.nn.constraints.UnitNorm(axis=0)[source]

Bases: deepchem.nn.constraints.Constraint

Constrains the weights incident to each hidden unit to have unit norm.

# Arguments
axis: integer, axis along which to calculate weight norms.
For instance, in a Dense layer the weight matrix has shape (input_dim, output_dim); set axis to 0 to constrain each weight vector of length (input_dim,). In a Convolution2D layer with dim_ordering="tf", the weight tensor has shape (rows, cols, input_depth, output_depth); set axis to [0, 1, 2] to constrain the weights of each filter tensor of size (rows, cols, input_depth).
deepchem.nn.constraints.get(identifier, kwargs=None)[source]
deepchem.nn.constraints.maxnorm

alias of MaxNorm

deepchem.nn.constraints.nonneg

alias of NonNeg

deepchem.nn.constraints.unitnorm

alias of UnitNorm

deepchem.nn.copy module

Copies Classes from keras to remove dependency.

Most of this code is copied over from Keras. Hoping to use as a staging area while we remove our Keras dependency.

class deepchem.nn.copy.BatchNormalization(epsilon=0.001, mode=0, axis=-1, momentum=0.99, beta_init='zero', gamma_init='one', gamma_regularizer=None, beta_regularizer=None, **kwargs)[source]

Bases: deepchem.nn.copy.Layer

Batch normalization layer (Ioffe and Szegedy, 2015).

Normalize the activations of the previous layer at each batch, i.e. applies a transformation that maintains the mean activation close to 0 and the activation standard deviation close to 1.

Parameters:
  • epsilon – Small float > 0. Fuzz parameter.
  • mode – Integer, 0, 1 or 2.
    • 0: feature-wise normalization. Each feature map in the input is normalized separately. The axis on which to normalize is specified by the axis argument. During training we use per-batch statistics to normalize the data, and during testing we use running averages computed during the training phase.
    • 1: sample-wise normalization. This mode assumes a 2D input.
    • 2: feature-wise normalization, like mode 0, but using per-batch statistics to normalize the data during both testing and training.
  • axis – Integer, axis along which to normalize in mode 0. For instance, if your input tensor has shape (samples, channels, rows, cols), set axis to 1 to normalize per feature map (channels axis).
  • momentum – Momentum in the computation of the exponential average of the mean and standard deviation of the data, for feature-wise normalization.
  • beta_init – Name of initialization function for the shift parameter, or alternatively, a TensorFlow function to use for weight initialization.
  • gamma_init – Name of initialization function for the scale parameter, or alternatively, a TensorFlow function to use for weight initialization.
  • gamma_regularizer – Instance of WeightRegularizer (e.g. L1 or L2 regularization), applied to the gamma vector.
  • beta_regularizer – Instance of WeightRegularizer, applied to the beta vector.

Input shape: Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.

Output shape: Same shape as input.
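
A minimal construction sketch, with values mirroring the parameter descriptions above:

>>> from deepchem.nn.copy import BatchNormalization
>>> # mode 0 with axis=1 normalizes per feature map of a (samples, channels, rows, cols) input
>>> norm = BatchNormalization(epsilon=0.001, mode=0, axis=1)
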
__call__(x)

Wrapper around self.call() that handles x, which can be a tensor or a list/tuple of tensors.

add_loss(losses, inputs=None)

Adds losses to model.

add_weight(shape, initializer, regularizer=None, name=None)

Adds a weight variable to the layer.

Parameters:
  • shape (The shape tuple of the weight.) –
  • initializer (An Initializer instance (callable).) –
  • regularizer (An optional Regularizer instance.) –
build(input_shape)[source]
call(x)[source]
class deepchem.nn.copy.Dense(output_dim, input_dim, init='glorot_uniform', activation='relu', bias=True, **kwargs)[source]

Bases: deepchem.nn.copy.Layer

Just your regular densely-connected NN layer.

TODO(rbharath): Make this functional in deepchem

Example:

>>> import deepchem as dc
>>> # as first layer in a sequential model:
>>> model = dc.models.Sequential()
>>> model.add(dc.nn.Input(shape=16))
>>> model.add(dc.nn.Dense(32))
>>> # now the model will take as input arrays of shape (*, 16)
>>> # and output arrays of shape (*, 32)
>>> # this is equivalent to the above:
>>> model = dc.models.Sequential()
>>> model.add(dc.nn.Dense(32, input_dim=16))
Parameters:
  • output_dim – int > 0.
  • init – Name of initialization function for the weights of the layer.
  • activation – Name of activation function to use (see [activations](../activations.md)). If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
  • W_regularizer – Instance of WeightRegularizer (e.g. L1 or L2 regularization), applied to the main weights matrix.
  • b_regularizer – Instance of WeightRegularizer, applied to the bias.
  • activity_regularizer – Instance of [ActivityRegularizer](../regularizers.md), applied to the network output.
  • bias – Whether to include a bias (i.e. make the layer affine rather than linear).
  • input_dim – Dimensionality of the input (integer). This argument (or alternatively, the keyword argument input_shape) is required when using this layer as the first layer in a model.

Input shape: nD tensor with shape (nb_samples, ..., input_dim). The most common situation would be a 2D input with shape (nb_samples, input_dim).

Output shape: nD tensor with shape (nb_samples, ..., output_dim). For instance, for a 2D input with shape (nb_samples, input_dim), the output would have shape (nb_samples, output_dim).
add_loss(losses, inputs=None)

Adds losses to model.

add_weight(shape, initializer, regularizer=None, name=None)

Adds a weight variable to the layer.

Parameters:
  • shape (The shape tuple of the weight.) –
  • initializer (An Initializer instance (callable).) –
  • regularizer (An optional Regularizer instance.) –
call(x)

This is where the layer’s logic lives.

Parameters: x – Input tensor, or list/tuple of input tensors.
Returns: A tensor or list/tuple of tensors.
class deepchem.nn.copy.Dropout(p, seed=None, **kwargs)[source]

Bases: deepchem.nn.copy.Layer

Applies Dropout to the input.

Dropout consists of randomly setting a fraction p of input units to 0 at each update during training time, which helps prevent overfitting.

Parameters:
  • p – Float between 0 and 1; fraction of the input units to drop.
  • seed – Integer, random seed (optional).
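
A minimal construction sketch:

>>> from deepchem.nn.copy import Dropout
>>> drop = Dropout(p=0.5)  # drop half of the input units at each training update
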
__call__(x)

Wrapper around self.call() that handles x, which can be a tensor or a list/tuple of tensors.

add_loss(losses, inputs=None)

Adds losses to model.

add_weight(shape, initializer, regularizer=None, name=None)

Adds a weight variable to the layer.

Parameters:
  • shape (The shape tuple of the weight.) –
  • initializer (An Initializer instance (callable).) –
  • regularizer (An optional Regularizer instance.) –
call(x)[source]
deepchem.nn.copy.Input(shape=None, batch_shape=None, name=None, dtype=tf.float32)[source]

Input() is used to create a placeholder input

Parameters:
  • shape – A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors.
  • name – An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.
  • dtype – The data type expected by the input, as a string (float32, float64, int32...).

Example:

>>> # this is a logistic regression in Keras
>>> a = dc.nn.Input(shape=(32,))
>>> b = dc.nn.Dense(16)(a)
>>> model = dc.nn.FunctionalModel(input=a, output=b)
class deepchem.nn.copy.InputLayer(input_shape=None, batch_input_shape=None, input_dtype=None, name=None)[source]

Bases: deepchem.nn.copy.Layer

Layer to be used as an entry point into a graph.

It creates a placeholder tensor (pass the arguments input_shape or batch_input_shape, as well as input_dtype).

Parameters:
  • input_shape (Shape tuple, not including the batch axis.) –
  • batch_input_shape (Shape tuple, including the batch axis.) –
  • input_dtype (Datatype of the input.) –
  • name (Name of the layer (string).) –
add_loss(losses, inputs=None)

Adds losses to model.

add_weight(shape, initializer, regularizer=None, name=None)

Adds a weight variable to the layer.

Parameters:
  • shape (The shape tuple of the weight.) –
  • initializer (An Initializer instance (callable).) –
  • regularizer (An optional Regularizer instance.) –
call(x)

This is where the layer’s logic lives.

Parameters: x – Input tensor, or list/tuple of input tensors.
Returns: A tensor or list/tuple of tensors.
class deepchem.nn.copy.Layer(**kwargs)[source]

Bases: object

Abstract base layer class.

name

String, must be unique within a model.

trainable

Boolean, whether the layer weights will be updated during training.

uses_learning_phase

Whether any operation of the layer uses model_ops.in_training_phase() or model_ops.in_test_phase().

input_shape

Shape tuple. Provided for convenience, but note that there may be cases in which this attribute is ill-defined (e.g. a shared layer with multiple input shapes), in which case requesting input_shape will raise an Exception. Prefer using layer.get_input_shape_for(input_shape).

output_shape

Shape tuple. See above.

input, output

Input/output tensor(s). Note that if the layer is used more than once (shared layer), this is ill-defined and will raise an exception. In such cases, use layer.get_input_at(node_index) and layer.get_output_at(node_index).

call(x): Where the layer's logic lives.

__call__(x): Wrapper around the layer logic (call). If x is a tensor, it connects the current layer with the layer that produced the tensor and adds this layer to the tensor's history; if the layer is not yet built, it builds the layer first.
__call__(x)[source]

Wrapper around self.call() that handles x, which can be a tensor or a list/tuple of tensors.

add_loss(losses, inputs=None)[source]

Adds losses to model.

add_weight(shape, initializer, regularizer=None, name=None)[source]

Adds a weight variable to the layer.

Parameters:
  • shape (The shape tuple of the weight.) –
  • initializer (An Initializer instance (callable).) –
  • regularizer (An optional Regularizer instance.) –
call(x)[source]

This is where the layer’s logic lives.

Parameters: x – Input tensor, or list/tuple of input tensors.
Returns: A tensor or list/tuple of tensors.
deepchem.nn.copy.object_list_uid(object_list)[source]
deepchem.nn.copy.to_list(x)[source]

This normalizes a list/tensor into a list.

If a tensor is passed, we return a list of size 1 containing the tensor.

deepchem.nn.initializations module

Ops for tensor initialization

deepchem.nn.initializations.get(identifier, **kwargs)[source]
deepchem.nn.initializations.get_fans(shape)[source]
deepchem.nn.initializations.glorot_normal(shape, name=None)[source]

Glorot normal variance scaling initializer.

# References
Glorot & Bengio, AISTATS 2010
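
A minimal usage sketch (initializers here take a shape and return a TensorFlow variable-like tensor):

>>> from deepchem.nn import initializations
>>> W = initializations.glorot_normal((16, 32), name='W')  # stddev scaled by fan-in and fan-out
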
deepchem.nn.initializations.glorot_uniform(shape, name=None)[source]
deepchem.nn.initializations.he_normal(shape, name=None)[source]

He normal variance scaling initializer.

# References
He et al., http://arxiv.org/abs/1502.01852
deepchem.nn.initializations.he_uniform(shape, name=None)[source]

He uniform variance scaling initializer.

deepchem.nn.initializations.identity(shape, scale=1, name=None)[source]
deepchem.nn.initializations.lecun_uniform(shape, name=None)[source]

LeCun uniform variance scaling initializer.

# References
LeCun 98, Efficient Backprop, http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf
deepchem.nn.initializations.normal(shape, scale=0.05, name=None)[source]
deepchem.nn.initializations.one(shape, name=None)[source]
deepchem.nn.initializations.orthogonal(shape, scale=1.1, name=None)[source]

Orthogonal initializer.

# References
Saxe et al., http://arxiv.org/abs/1312.6120
deepchem.nn.initializations.uniform(shape, scale=0.05, name=None)[source]
deepchem.nn.initializations.zero(shape, name=None)[source]

deepchem.nn.layers module

Custom Keras Layers.

class deepchem.nn.layers.AttnLSTMEmbedding(n_test, n_support, n_feat, max_depth, init='glorot_uniform', activation='linear', dropout=None, **kwargs)[source]

Bases: deepchem.nn.copy.Layer

Implements the AttnLSTM from the Matching Networks paper.

References: Matching Networks for One Shot Learning https://arxiv.org/pdf/1606.04080v1.pdf

Order Matters: Sequence to sequence for sets https://arxiv.org/abs/1511.06391

__call__(x)

Wrapper around self.call() that handles x, which can be a tensor or a list/tuple of tensors.

add_loss(losses, inputs=None)

Adds losses to model.

add_weight(shape, initializer, regularizer=None, name=None)

Adds a weight variable to the layer.

Parameters:
  • shape (The shape tuple of the weight.) –
  • initializer (An Initializer instance (callable).) –
  • regularizer (An optional Regularizer instance.) –
call(x_xp, mask=None)[source]

Execute this layer on input tensors.

Parameters:x_xp (list) – List of two tensors (X, Xp). X should be of shape (n_test, n_feat) and Xp should be of shape (n_support, n_feat) where n_test is the size of the test set, n_support that of the support set, and n_feat is the number of per-atom features.
Returns:Returns two tensors of same shape as input. Namely the output shape will be [(n_test, n_feat), (n_support, n_feat)]
Return type:list
compute_mask(x, mask=None)[source]
get_output_shape_for(input_shape)[source]

Returns the output shape. Same as input_shape.

Parameters:input_shape (list) – Will be of form [(n_test, n_feat), (n_support, n_feat)]
Returns:Of same shape as input [(n_test, n_feat), (n_support, n_feat)]
Return type:list
class deepchem.nn.layers.DAGGather(n_graph_feat=30, n_outputs=30, max_atoms=50, layer_sizes=[100], init='glorot_uniform', activation='relu', dropout=None, **kwargs)[source]

Bases: deepchem.nn.copy.Layer

Gather layer of the DAG model. For each molecule, the graph outputs are summed and fed into another NN.

DAGgraph_step(batch_inputs, W_list, b_list)[source]
__call__(x)

Wrapper around self.call() that handles x, which can be a tensor or a list/tuple of tensors.

add_loss(losses, inputs=None)

Adds losses to model.

add_weight(shape, initializer, regularizer=None, name=None)

Adds a weight variable to the layer.

Parameters:
  • shape (The shape tuple of the weight.) –
  • initializer (An Initializer instance (callable).) –
  • regularizer (An optional Regularizer instance.) –
build()[source]

Construct internal trainable weights.

call(x, mask=None)[source]

Execute this layer on input tensors.

x = [graph_features, membership]

Parameters:x (list) – [graph_features, membership], as described above; graph_features is a tensor of each atom’s graph features.
Returns:outputs – Tensor of each molecule’s features
Return type:tf.Tensor
class deepchem.nn.layers.DAGLayer(n_graph_feat=30, n_atom_feat=75, max_atoms=50, layer_sizes=[100], init='glorot_uniform', activation='relu', dropout=None, batch_size=64, **kwargs)[source]

Bases: deepchem.nn.copy.Layer

Main layer of the DAG model. For a molecule with n atoms, n different graphs are generated and run through the network. The final output of each graph becomes the graph feature of the corresponding atom; these features are summed and passed to another network in the DAGGather layer.

DAGgraph_step(batch_inputs, W_list, b_list)[source]
__call__(x)

Wrapper around self.call() that handles x, which can be a tensor or a list/tuple of tensors.

add_loss(losses, inputs=None)

Adds losses to model.

add_weight(shape, initializer, regularizer=None, name=None)

Adds a weight variable to the layer.

Parameters:
  • shape (The shape tuple of the weight.) –
  • initializer (An Initializer instance (callable).) –
  • regularizer (An optional Regularizer instance.) –
build()[source]

Construct internal trainable weights.

call(x, mask=None)[source]

Execute this layer on input tensors.

x = [atom_features, parents, calculation_orders, calculation_masks, membership, n_atoms]

Parameters:
  • x (list) – list of Tensors of form described above.
  • mask (bool, optional) – Ignored. Present only to shadow superclass call() method.
Returns:

outputs – Tensor of atom features, of shape (n_atoms, n_graph_feat)

Return type:

tf.Tensor

class deepchem.nn.layers.DTNNEmbedding(n_embedding=30, periodic_table_length=30, init='glorot_uniform', **kwargs)[source]

Bases: deepchem.nn.copy.Layer

Generate embeddings for all atoms in the batch

__call__(x)

Wrapper around self.call() that handles x, which can be a tensor or a list/tuple of tensors.

add_loss(losses, inputs=None)

Adds losses to model.

add_weight(shape, initializer, regularizer=None, name=None)

Adds a weight variable to the layer.

Parameters:
  • shape (The shape tuple of the weight.) –
  • initializer (An Initializer instance (callable).) –
  • regularizer (An optional Regularizer instance.) –
build()[source]
call(x)[source]

Execute this layer on input tensors.

Parameters:x (Tensor) – 1D tensor of length n_atoms (atomic number)
Returns:Of shape (n_atoms, n_embedding), where n_embedding is number of atom features
Return type:tf.Tensor
class deepchem.nn.layers.DTNNGather(n_embedding=30, n_outputs=100, layer_sizes=[100], output_activation=True, init='glorot_uniform', activation='tanh', **kwargs)[source]

Bases: deepchem.nn.copy.Layer

Map the atomic features into molecular properties and sum them per molecule.

__call__(x)

Wrapper around self.call() that handles x, which can be a tensor or a list/tuple of tensors.

add_loss(losses, inputs=None)

Adds losses to model.

add_weight(shape, initializer, regularizer=None, name=None)

Adds a weight variable to the layer.

Parameters:
  • shape (The shape tuple of the weight.) –
  • initializer (An Initializer instance (callable).) –
  • regularizer (An optional Regularizer instance.) –
build()[source]
call(x)[source]

Execute this layer on input tensors.

Parameters:x (list of Tensor) – should be [embedding tensor of molecules, of shape (batch_size*max_n_atoms*n_embedding), mask tensor of molecules, of shape (batch_size*max_n_atoms)]
Returns:Of shape (batch_size)
Return type:list of tf.Tensor
class deepchem.nn.layers.DTNNStep(n_embedding=30, n_distance=100, n_hidden=60, init='glorot_uniform', activation='tanh', **kwargs)[source]

Bases: deepchem.nn.copy.Layer

A convolution step that merges the distance and atom information of all other atoms into the current atom.

Model based on https://arxiv.org/abs/1609.08259

__call__(x)

Wrapper around self.call() that handles x, which can be a tensor or a list/tuple of tensors.

add_loss(losses, inputs=None)

Adds losses to model.

add_weight(shape, initializer, regularizer=None, name=None)

Adds a weight variable to the layer.

Parameters:
  • shape (The shape tuple of the weight.) –
  • initializer (An Initializer instance (callable).) –
  • regularizer (An optional Regularizer instance.) –
build()[source]
call(x)[source]

Execute this layer on input tensors.

Parameters:x (list of Tensor) – should be [atom_features: n_atoms*n_embedding, distance_matrix: n_pairs*n_distance, atom_membership: n_atoms, distance_membership_i: n_pairs, distance_membership_j: n_pairs]
Returns:new embeddings for atoms, same shape as x[0]
Return type:tf.Tensor
class deepchem.nn.layers.GraphConv(nb_filter, n_atom_features, init='glorot_uniform', activation='linear', dropout=None, max_deg=10, min_deg=0, **kwargs)[source]

Bases: deepchem.nn.copy.Layer

Performs a graph convolution.

Note this layer expects the presence of placeholders defined by GraphTopology and expects that they follow the ordering provided by GraphTopology.get_input_placeholders().

__call__(x)

Wrapper around self.call() that handles x, which can be a tensor or a list/tuple of tensors.

add_loss(losses, inputs=None)

Adds losses to model.

add_weight(shape, initializer, regularizer=None, name=None)

Adds a weight variable to the layer.

Parameters:
  • shape (The shape tuple of the weight.) –
  • initializer (An Initializer instance (callable).) –
  • regularizer (An optional Regularizer instance.) –
build()[source]

Construct internal trainable weights.

n_atom_features should provide the number of features per atom.

Parameters:n_atom_features (int) – Number of features provided per atom.
call(x, mask=None)[source]

Execute this layer on input tensors.

This layer is meant to be executed on a Graph. So x is expected to be a list of placeholders, with the first placeholder the list of atom_features (learned or input) at this level, the second the deg_slice, the third the membership, and the remaining the deg_adj_lists.

Visually

x = [atom_features, deg_slice, membership, deg_adj_list placeholders...]

Parameters:
  • x (list) – list of Tensors of form described above.
  • mask (bool, optional) – Ignored. Present only to shadow superclass call() method.
Returns:

atom_features – Of shape (n_atoms, nb_filter)

Return type:

tf.Tensor

get_output_shape_for(input_shape)[source]

Output tensor shape produced by this layer.

class deepchem.nn.layers.GraphGather(batch_size, activation='linear', **kwargs)[source]

Bases: deepchem.nn.copy.Layer

Gathers information for each molecule.

The various graph convolution operations expect as input a tensor atom_features of shape (n_atoms, n_feat). However, we train on batches of molecules at a time. The GraphTopology object groups a list of molecules into the atom_features tensor. The tensorial operations are done on this tensor, but at the end, the atoms need to be grouped back into molecules. This layer takes care of that operation.

Note this layer expects the presence of placeholders defined by GraphTopology and expects that they follow the ordering provided by GraphTopology.get_input_placeholders().

__call__(x)

Wrapper around self.call() that handles x, which can be a tensor or a list/tuple of tensors.

add_loss(losses, inputs=None)

Adds losses to model.

add_weight(shape, initializer, regularizer=None, name=None)

Adds a weight variable to the layer.

Parameters:
  • shape (The shape tuple of the weight.) –
  • initializer (An Initializer instance (callable).) –
  • regularizer (An optional Regularizer instance.) –
build(input_shape)[source]

Nothing needed (no learnable weights).

call(x, mask=None)[source]

Execute this layer on input tensors.

This layer is meant to be executed on a Graph. So x is expected to be a list of placeholders, with the first placeholder the list of atom_features (learned or input) at this level, the second the deg_slice, the third the membership, and the remaining the deg_adj_lists.

Visually

x = [atom_features, deg_slice, membership, deg_adj_list placeholders...]

Parameters:
  • x (list) – list of Tensors of form described above.
  • mask (bool, optional) – Ignored. Present only to shadow superclass call() method.
Returns:

Of shape (batch_size, n_feat), where n_feat is number of atom_features

Return type:

tf.Tensor

get_output_shape_for(input_shape)[source]

Output tensor shape produced by this layer.

class deepchem.nn.layers.GraphPool(max_deg=10, min_deg=0, **kwargs)[source]

Bases: deepchem.nn.copy.Layer

Performs a pooling operation over an arbitrary graph.

Performs a max pool over the feature vectors of an atom and its neighbors in the bond graph. Returns a tensor of the same size as the input.

__call__(x)

Wrapper around self.call() that handles x, which can be a tensor or a list/tuple of tensors.

add_loss(losses, inputs=None)

Adds losses to model.

add_weight(shape, initializer, regularizer=None, name=None)

Adds a weight variable to the layer.

Parameters:
  • shape (The shape tuple of the weight.) –
  • initializer (An Initializer instance (callable).) –
  • regularizer (An optional Regularizer instance.) –
build(input_shape)[source]

Nothing needed (no learnable weights).

call(x, mask=None)[source]

Execute this layer on input tensors.

This layer is meant to be executed on a Graph. So x is expected to be a list of placeholders, with the first placeholder the list of atom_features (learned or input) at this level, the second the deg_slice, the third the membership, and the remaining the deg_adj_lists.

Visually

x = [atom_features, deg_slice, membership, deg_adj_list placeholders...]

Parameters:
  • x (list) – list of Tensors of form described above.
  • mask (bool, optional) – Ignored. Present only to shadow superclass call() method.
Returns:

Of shape (n_atoms, n_feat), where n_feat is number of atom_features

Return type:

tf.Tensor

get_output_shape_for(input_shape)[source]

Output tensor shape produced by this layer.

class deepchem.nn.layers.LSTMStep(output_dim, input_dim, init='glorot_uniform', inner_init='orthogonal', forget_bias_init='one', activation='tanh', inner_activation='hard_sigmoid', **kwargs)[source]

Bases: deepchem.nn.copy.Layer

LSTM whose call is a single step in the LSTM.

This layer exists because the Keras LSTM layer is intrinsically linked to an RNN with sequence inputs. Here we do not use sequence inputs; rather, we generate a sequence of inputs from the intermediate outputs of the LSTM, which requires step-by-step operation of the LSTM.

__call__(x)

Wrapper around self.call() that handles x, which can be a tensor or a list/tuple of tensors.

add_loss(losses, inputs=None)

Adds losses to model.

add_weight(shape, initializer, regularizer=None, name=None)

Adds a weight variable to the layer.

Parameters:
  • shape (The shape tuple of the weight.) –
  • initializer (An Initializer instance (callable).) –
  • regularizer (An optional Regularizer instance.) –
build()[source]
call(x_states, mask=None)[source]
get_initial_states(input_shape)[source]
get_output_shape_for(input_shape)[source]
class deepchem.nn.layers.ResiLSTMEmbedding(n_test, n_support, n_feat, max_depth, init='glorot_uniform', activation='linear', **kwargs)[source]

Bases: deepchem.nn.copy.Layer

Embeds its inputs using an LSTM layer.

__call__(x)

Wrapper around self.call() that handles x, which can be a tensor or a list/tuple of tensors.

add_loss(losses, inputs=None)

Adds losses to model.

add_weight(shape, initializer, regularizer=None, name=None)

Adds a weight variable to the layer.

Parameters:
  • shape (The shape tuple of the weight.) –
  • initializer (An Initializer instance (callable).) –
  • regularizer (An optional Regularizer instance.) –
build()[source]

Builds this layer.

call(argument, mask=None)[source]

Execute this layer on input tensors.

Parameters:argument (list) – List of two tensors (X, Xp). X should be of shape (n_test, n_feat) and Xp should be of shape (n_support, n_feat) where n_test is the size of the test set, n_support that of the support set, and n_feat is the number of per-atom features.
Returns:Returns two tensors of same shape as input. Namely the output shape will be [(n_test, n_feat), (n_support, n_feat)]
Return type:list
compute_mask(x, mask=None)[source]
get_output_shape_for(input_shape)[source]

Returns the output shape. Same as input_shape.

Parameters:input_shape (list) – Will be of form [(n_test, n_feat), (n_support, n_feat)]
Returns:Of same shape as input [(n_test, n_feat), (n_support, n_feat)]
Return type:list
deepchem.nn.layers.affine(x, W, b)[source]
deepchem.nn.layers.cos(x, y)[source]
deepchem.nn.layers.graph_conv(atoms, deg_adj_lists, deg_slice, max_deg, min_deg, W_list, b_list)[source]

Core tensorflow function implementing graph convolution

Parameters:
  • atoms (tf.Tensor) – Should be of shape (n_atoms, n_feat)
  • deg_adj_lists (list) – Of length (max_deg+1-min_deg). The deg-th element is a list of adjacency lists for atoms of degree deg.
  • deg_slice (tf.Tensor) – Of shape (max_deg+1-min_deg,2). Explained in GraphTopology.
  • max_deg (int) – Maximum degree of atoms in molecules.
  • min_deg (int) – Minimum degree of atoms in molecules
  • W_list (list) – List of learnable weights for convolution.
  • b_list (list) – List of learnable biases for convolution.
Returns:

Of shape (n_atoms, n_feat)

Return type:

tf.Tensor

deepchem.nn.layers.graph_gather(atoms, membership_placeholder, batch_size)[source]
Parameters:
  • atoms (tf.Tensor) – Of shape (n_atoms, n_feat)
  • membership_placeholder (tf.Placeholder) – Of shape (n_atoms,). Molecule each atom belongs to.
  • batch_size (int) – Batch size for deep model.
Returns:

Of shape (batch_size, n_feat)

Return type:

tf.Tensor
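
Conceptually, this is a segment sum over the membership vector. A minimal TensorFlow 1.x sketch of the same operation (not necessarily the library's exact implementation):

>>> import tensorflow as tf
>>> def graph_gather_sketch(atoms, membership, batch_size):
...   # sum each molecule's atom rows into a single (n_feat,) row
...   return tf.unsorted_segment_sum(atoms, membership, batch_size)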

deepchem.nn.layers.graph_pool(atoms, deg_adj_lists, deg_slice, max_deg, min_deg)[source]
Parameters:
  • atoms (tf.Tensor) – Of shape (n_atoms, n_feat)
  • deg_adj_lists (list) – Of length (max_deg+1-min_deg). The deg-th element is a list of adjacency lists for atoms of degree deg.
  • deg_slice (tf.Tensor) – Of shape (max_deg+1-min_deg,2). Explained in GraphTopology.
  • max_deg (int) – Maximum degree of atoms in molecules.
  • min_deg (int) – Minimum degree of atoms in molecules
Returns:

Of shape (batch_size, n_feat)

Return type:

tf.Tensor

deepchem.nn.layers.sum_neigh(atoms, deg_adj_lists, max_deg)[source]

Store the summed atoms by degree

deepchem.nn.layers.tf_affine(x, vm, scope)[source]

deepchem.nn.model_ops module

Ops for graph construction.

Large amounts of code borrowed from Keras. Will try to incorporate into DeepChem properly.

deepchem.nn.model_ops.add_bias(tensor, init=None, name=None)[source]

Add a bias term to a tensor.

Parameters:
  • tensor (tf.Tensor) – Variable tensor.
  • init (float) – Bias initializer. Defaults to zero.
  • name (str) – Name for this op. Defaults to tensor.op.name.
Returns:

A biased tensor with the same shape as the input tensor.

Return type:

tf.Tensor

deepchem.nn.model_ops.binary_crossentropy(output, target, from_logits=False)[source]

Binary crossentropy between an output tensor and a target tensor.

# Arguments
output: A tensor.
target: A tensor with the same shape as output.
from_logits: Whether output is expected to be a logits tensor. By default, we consider that output encodes a probability distribution.

# Returns
A tensor.
deepchem.nn.model_ops.cast_to_floatx(x)[source]

Cast a Numpy array to the default Keras float type.

Parameters:x (Numpy array.) –
Returns:
Return type:The same Numpy array, cast to its new type.
deepchem.nn.model_ops.categorical_crossentropy(output, target, from_logits=False)[source]

Categorical crossentropy between an output tensor and a target tensor, where the target is a tensor of the same shape as the output.

# TODO(rbharath): Should probably swap this over to tf mode.

deepchem.nn.model_ops.clip(x, min_value, max_value)[source]

Element-wise value clipping.

Returns:
Return type:A tensor.
deepchem.nn.model_ops.concatenate(tensors, axis=-1)[source]

Concatenates a list of tensors alongside the specified axis.

Returns:
Return type:A tensor.
deepchem.nn.model_ops.cosine_distances(test, support)[source]

Computes pairwise cosine distances between provided tensors

Parameters:
  • test (tf.Tensor) – Of shape (n_test, n_feat)
  • support (tf.Tensor) – Of shape (n_support, n_feat)
Returns:

Of shape (n_test, n_support)

Return type:

tf.Tensor
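
A sketch of the underlying computation, assuming cosine similarity of unit-normalized rows (the library version may differ in sign or scaling):

>>> import tensorflow as tf
>>> def cosine_sketch(test, support):
...   test_n = tf.nn.l2_normalize(test, 1)      # unit-normalize each row
...   support_n = tf.nn.l2_normalize(support, 1)
...   return tf.matmul(test_n, support_n, transpose_b=True)  # (n_test, n_support)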

deepchem.nn.model_ops.dot(x, y)[source]

Multiplies two tensors (and/or variables) and returns a tensor. When multiplying an nD tensor with another nD tensor, it reproduces the Theano behavior (e.g. shapes (2, 3) · (4, 3, 5) -> (2, 4, 5)).

Parameters:
  • x (Tensor or variable.) –
  • y (Tensor or variable.) –
Returns:

Return type:

A tensor, dot product of x and y.

deepchem.nn.model_ops.dropout(tensor, dropout_prob, training=True, training_only=True)[source]

Random dropout.

This implementation supports “always-on” dropout (training_only=False), which can be used to calculate model uncertainty. See Gal and Ghahramani, http://arxiv.org/abs/1506.02142.

NOTE(user): To simplify the implementation, I have chosen not to reverse
the scaling that occurs in tf.nn.dropout when using dropout during inference. This shouldn’t be an issue since the activations will be scaled by the same constant in both training and inference. This means that there are no training-time differences between networks that use dropout during inference and those that do not.
Parameters:
  • tensor (tf.Tensor) – Input tensor.
  • dropout_prob (float) – Float giving dropout probability for weights (NOT keep probability).
  • training_only (bool) – Boolean. If True (standard dropout), apply dropout only during training. If False, apply dropout during inference as well.
Returns:

A tensor with the same shape as the input tensor.

Return type:

tf.Tensor
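
A minimal usage sketch contrasting the two modes:

>>> import tensorflow as tf
>>> from deepchem.nn import model_ops
>>> h = tf.ones((4, 8))
>>> h_train = model_ops.dropout(h, dropout_prob=0.5)                    # active only in training
>>> h_mc = model_ops.dropout(h, dropout_prob=0.5, training_only=False)  # always on, for uncertainty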

deepchem.nn.model_ops.elu(x, alpha=1.0)[source]

Exponential linear unit.

Parameters:
  • x (A tensor or variable to compute the activation function for.) –
  • alpha (A scalar, slope of positive section.) –
Returns:

Return type:

A tensor.

deepchem.nn.model_ops.epsilon()[source]

Returns the value of the fuzz factor used in numeric expressions.

Returns:
Return type:A float.
deepchem.nn.model_ops.euclidean_distance(test, support, max_dist_sq=20)[source]

Computes pairwise euclidean distances between provided tensors

TODO(rbharath): BROKEN! THIS DOESN’T WORK!

Parameters:
  • test (tf.Tensor) – Of shape (n_test, n_feat)
  • support (tf.Tensor) – Of shape (n_support, n_feat)
  • max_dist_sq (float, optional) – Maximum pairwise distance allowed.
Returns:

Of shape (n_test, n_support)

Return type:

tf.Tensor

deepchem.nn.model_ops.fully_connected_layer(tensor, size=None, weight_init=None, bias_init=None, name=None)[source]

Fully connected layer.

Parameters:
  • tensor (tf.Tensor) – Input tensor.
  • size (int) – Number of output nodes for this layer.
  • weight_init (float) – Weight initializer.
  • bias_init (float) – Bias initializer.
  • name (str) – Name for this op. Defaults to ‘fully_connected’.
Returns:

A new tensor representing the output of the fully connected layer.

Return type:

tf.Tensor

Raises:

ValueError – If input tensor is not 2D.

deepchem.nn.model_ops.get_dtype(x)[source]

Returns the dtype of a Keras tensor or variable, as a string.

Parameters:x (Tensor or variable.) –
Returns:
Return type:String, dtype of x.
deepchem.nn.model_ops.get_ndim(x)[source]

Returns the number of axes in a tensor, as an integer.

Parameters:x (Tensor or variable.) –
Returns:
Return type:Integer (scalar), number of axes.
deepchem.nn.model_ops.get_uid(prefix='')[source]

Provides a unique UID given a string prefix.

Parameters:prefix (string.) –
Returns:
Return type:An integer.
deepchem.nn.model_ops.hard_sigmoid(x)[source]

Segment-wise linear approximation of sigmoid. Faster than sigmoid. Returns 0. if x < -2.5, 1. if x > 2.5. In -2.5 <= x <= 2.5, returns 0.2 * x + 0.5.

Parameters:x (A tensor or variable.) –
Returns:
Return type:A tensor.
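
The piecewise definition above is just a clipped line; a one-line TensorFlow sketch of the same function:

>>> import tensorflow as tf
>>> def hard_sigmoid_sketch(x):
...   return tf.clip_by_value(0.2 * x + 0.5, 0., 1.)  # 0 below -2.5, 1 above 2.5, linear in between
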
deepchem.nn.model_ops.in_train_phase(x, alt)[source]

Selects x in train phase, and alt otherwise. Note that alt should have the same shape as x.

Returns:
Return type:Either x or alt based on K.learning_phase.
deepchem.nn.model_ops.int_shape(x)[source]

Returns the shape of a Keras tensor or a Keras variable as a tuple of integers or None entries.

Parameters:x (Tensor or variable.) –
Returns:
Return type:A tuple of integers (or None entries).
deepchem.nn.model_ops.l2_normalize(x, axis)[source]

Normalizes a tensor wrt the L2 norm alongside the specified axis.

Parameters:
  • x (input tensor.) –
  • axis (axis along which to perform normalization.) –
Returns:

Return type:

A tensor.

deepchem.nn.model_ops.learning_phase()[source]

Returns the learning phase flag.

The learning phase flag is a bool tensor (0 = test, 1 = train) to be passed as input to any Keras function that uses a different behavior at train time and test time.

deepchem.nn.model_ops.logits(features, num_classes=2, weight_init=None, bias_init=None, dropout_prob=None, name=None)[source]

Create a logits tensor for a single classification task.

You almost certainly don’t want dropout on there – it’s like randomly setting the (unscaled) probability of a target class to 0.5.

Parameters:
  • features – A 2D tensor with dimensions batch_size x num_features.
  • num_classes – Number of classes for each task.
  • weight_init – Weight initializer.
  • bias_init – Bias initializer.
  • dropout_prob – Float giving dropout probability for weights (NOT keep probability).
  • name – Name for this op.
Returns:

A logits tensor with shape batch_size x num_classes.

deepchem.nn.model_ops.lrelu(alpha=0.01)[source]

Create a leaky rectified linear unit function.

This function returns a new function that implements the LReLU with a specified alpha. The returned value can be used as an activation function in network layers.

Parameters:alpha (float) – the slope of the function when x<0
Returns:
Return type:a function f(x) that returns alpha*x when x<0, and x when x>0.
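
A minimal usage sketch:

>>> import tensorflow as tf
>>> from deepchem.nn import model_ops
>>> leaky = model_ops.lrelu(alpha=0.01)     # returns an activation function
>>> y = leaky(tf.constant([-1.0, 2.0]))     # approximately [-0.01, 2.0]
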
deepchem.nn.model_ops.max(x, axis=None, keepdims=False)[source]

Maximum value in a tensor.

Parameters:
  • x (A tensor or variable.) –
  • axis (An integer, the axis to find maximum values.) –
  • keepdims (A boolean, whether to keep the dimensions or not.) – If keepdims is False, the rank of the tensor is reduced by 1. If keepdims is True, the reduced dimension is retained with length 1.
Returns:

Return type:

A tensor with maximum values of x.

deepchem.nn.model_ops.mean(x, axis=None, keepdims=False)[source]

Mean of a tensor, alongside the specified axis.

Parameters:
  • x (A tensor or variable.) –
  • axis (A list of integer. Axes to compute the mean.) –
  • keepdims (A boolean, whether to keep the dimensions or not.) – If keepdims is False, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is True, the reduced dimensions are retained with length 1.
Returns:

Return type:

A tensor with the mean of elements of x.

deepchem.nn.model_ops.moving_average_update(variable, value, momentum)[source]
deepchem.nn.model_ops.multitask_logits(features, num_tasks, num_classes=2, weight_init=None, bias_init=None, dropout_prob=None, name=None)[source]

Create a logit tensor for each classification task.

Parameters:
  • features – A 2D tensor with dimensions batch_size x num_features.
  • num_tasks – Number of classification tasks.
  • num_classes – Number of classes for each task.
  • weight_init – Weight initializer.
  • bias_init – Bias initializer.
  • dropout_prob – Float giving dropout probability for weights (NOT keep probability).
  • name – Name for this op. Defaults to ‘multitask_logits’.
Returns:

A list of logit tensors; one for each classification task.
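
A minimal usage sketch, assuming 12 tasks and the documented defaults:

>>> import tensorflow as tf
>>> from deepchem.nn import model_ops
>>> features = tf.placeholder(tf.float32, shape=(64, 128))  # batch_size x num_features
>>> logits_list = model_ops.multitask_logits(features, num_tasks=12)
>>> # logits_list holds one (batch_size, num_classes) tensor per task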

deepchem.nn.model_ops.normalize_batch_in_training(x, gamma, beta, reduction_axes, epsilon=0.001)[source]

Computes the mean and standard deviation of the batch, then applies batch normalization to it.

Returns:
Return type:A tuple length of 3, (normalized_tensor, mean, variance).
deepchem.nn.model_ops.ones(shape, dtype=None, name=None)[source]

Instantiates an all-ones tensor variable and returns it.

Parameters:
  • shape (Tuple of integers, shape of returned Keras variable.) –
  • dtype (Tensorflow dtype) –
  • name (String, name of returned Keras variable.) –
Returns:

Return type:

A Keras variable, filled with 1.0.

deepchem.nn.model_ops.optimizer(optimizer='adam', learning_rate=0.001, momentum=0.9)[source]

Create model optimizer.

Parameters:
  • optimizer (str, optional) – Name of optimizer
  • learning_rate (float, optional) – Learning rate for algorithm
  • momentum (float, optional) – Momentum rate
Returns:

Return type:

A training Optimizer.

Raises:

NotImplementedError – If an unsupported optimizer is requested.
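
A minimal usage sketch; the returned object follows the standard TensorFlow Optimizer interface:

>>> from deepchem.nn import model_ops
>>> opt = model_ops.optimizer('adam', learning_rate=0.001)
>>> # train_op = opt.minimize(loss)  # with `loss` defined elsewhere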

deepchem.nn.model_ops.random_normal_variable(shape, mean, scale, dtype=tf.float32, name=None, seed=None)[source]

Instantiates a Keras variable filled with samples drawn from a normal distribution and returns it.

Parameters:
  • shape (Tuple of integers, shape of returned Keras variable.) –
  • mean (Float, mean of the normal distribution.) –
  • scale (Float, standard deviation of the normal distribution.) –
  • dtype (Tensorflow dtype) –
  • name (String, name of returned Keras variable.) –
  • seed (Integer, random seed.) –
Returns:

Return type:

A tf.Variable, filled with drawn samples.

deepchem.nn.model_ops.random_uniform_variable(shape, low, high, dtype=tf.float32, name=None, seed=None)[source]

Instantiates a variable filled with samples drawn from a uniform distribution and returns it.

Parameters:
  • shape (Tuple of integers, shape of returned variable.) –
  • low (Float, lower boundary of the output interval.) –
  • high (Float, upper boundary of the output interval.) –
  • dtype (Tensorflow dtype) –
  • name (String, name of returned variable.) –
  • seed (Integer, random seed.) –
Returns:

Return type:

A tf.Variable, filled with drawn samples.

deepchem.nn.model_ops.relu(x, alpha=0.0, max_value=None)[source]

Rectified linear unit. With default values, it returns element-wise max(x, 0).

Parameters:
  • x (A tensor or variable.) –
  • alpha (A scalar, slope of negative section (default=`0.`).) –
  • max_value (Saturation threshold.) –
Returns:

Return type:

A tensor.

deepchem.nn.model_ops.selu(x)[source]

Scaled Exponential Linear unit.

Parameters:x (A tensor or variable.) –
Returns:
Return type:A tensor.

# References
Klambauer et al., Self-Normalizing Neural Networks, https://arxiv.org/abs/1706.02515

deepchem.nn.model_ops.softmax_N(tensor, name=None)[source]

Apply softmax across last dimension of a tensor.

Parameters:
  • tensor – Input tensor.
  • name – Name for this op. If None, defaults to ‘softmax_N’.
Returns:

A tensor with softmax-normalized values on the last dimension.

deepchem.nn.model_ops.sparse_categorical_crossentropy(output, target, from_logits=False)[source]

Categorical crossentropy between an output tensor and a target tensor, where the target is an integer tensor.

deepchem.nn.model_ops.sqrt(x)[source]

Element-wise square root.

Parameters:x (input tensor.) –
Returns:
Return type:A tensor.
deepchem.nn.model_ops.sum(x, axis=None, keepdims=False)[source]

Sum of the values in a tensor, alongside the specified axis.

Parameters:
  • x (A tensor or variable.) –
  • axis (An integer, the axis to sum over.) –
  • keepdims (A boolean, whether to keep the dimensions or not.) – If keepdims is False, the rank of the tensor is reduced by 1. If keepdims is True, the reduced dimension is retained with length 1.
Returns:

Return type:

A tensor with sum of x.

deepchem.nn.model_ops.switch(condition, then_expression, else_expression)[source]

Switches between two operations depending on a scalar value (int or bool). Note that both then_expression and else_expression should be symbolic tensors of the same shape.

Parameters:
  • condition (scalar tensor.) –
  • then_expression (either a tensor, or a callable that returns a tensor.) –
  • else_expression (either a tensor, or a callable that returns a tensor.) –
Returns:

Return type:

The selected tensor.
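
A minimal usage sketch, using the learning-phase flag as the condition (this is essentially what in_train_phase() does):

>>> import tensorflow as tf
>>> from deepchem.nn import model_ops
>>> x = tf.constant([1.0, 2.0])
>>> y = model_ops.switch(model_ops.learning_phase(), x * 0.5, x)  # x * 0.5 in training, x otherwise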

deepchem.nn.model_ops.var(x, axis=None, keepdims=False)[source]

Variance of a tensor, alongside the specified axis.

Parameters:
  • x (A tensor or variable.) –
  • axis (An integer, the axis to compute the variance.) –
  • keepdims (A boolean, whether to keep the dimensions or not.) – If keepdims is False, the rank of the tensor is reduced by 1. If keepdims is True, the reduced dimension is retained with length 1.
Returns:

Return type:

A tensor with the variance of elements of x.

deepchem.nn.model_ops.weight_decay(penalty_type, penalty)[source]

Add weight decay.

Parameters:
  • penalty_type (str) – Type of penalty to apply (e.g. “l1” or “l2”).
  • penalty (float) – Strength of the penalty.
Returns:A scalar tensor containing the weight decay cost.
Raises:NotImplementedError – If an unsupported penalty type is requested.
deepchem.nn.model_ops.zeros(shape, dtype=tf.float32, name=None)[source]

Instantiates an all-zeros variable and returns it.

Parameters:
  • shape (Tuple of integers, shape of returned Keras variable) –
  • dtype (Tensorflow dtype) –
  • name (String, name of returned Keras variable) –
Returns:

Return type:

A variable (including Keras metadata), filled with 0.0.

deepchem.nn.objectives module

Ops for objectives

Code borrowed from Keras.

deepchem.nn.objectives.KLD(y_true, y_pred)
deepchem.nn.objectives.MAE(y_true, y_pred)
deepchem.nn.objectives.MAPE(y_true, y_pred)
deepchem.nn.objectives.MSE(y_true, y_pred)
deepchem.nn.objectives.MSLE(y_true, y_pred)
deepchem.nn.objectives.binary_crossentropy(y_true, y_pred)[source]
deepchem.nn.objectives.categorical_crossentropy(y_true, y_pred)[source]
deepchem.nn.objectives.cosine(y_true, y_pred)
deepchem.nn.objectives.cosine_proximity(y_true, y_pred)[source]
deepchem.nn.objectives.hinge(y_true, y_pred)[source]
deepchem.nn.objectives.kld(y_true, y_pred)
deepchem.nn.objectives.kullback_leibler_divergence(y_true, y_pred)[source]
deepchem.nn.objectives.mae(y_true, y_pred)
deepchem.nn.objectives.mape(y_true, y_pred)
deepchem.nn.objectives.mean_absolute_error(y_true, y_pred)[source]
deepchem.nn.objectives.mean_absolute_percentage_error(y_true, y_pred)[source]
deepchem.nn.objectives.mean_squared_error(y_true, y_pred)[source]
deepchem.nn.objectives.mean_squared_logarithmic_error(y_true, y_pred)[source]
deepchem.nn.objectives.mse(y_true, y_pred)
deepchem.nn.objectives.msle(y_true, y_pred)
deepchem.nn.objectives.poisson(y_true, y_pred)[source]
deepchem.nn.objectives.sparse_categorical_crossentropy(y_true, y_pred)[source]
deepchem.nn.objectives.squared_hinge(y_true, y_pred)[source]
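
A minimal usage sketch (the lowercase and uppercase short names above are aliases of the spelled-out functions):

>>> import tensorflow as tf
>>> from deepchem.nn import objectives
>>> y_true = tf.constant([[0.0, 1.0]])
>>> y_pred = tf.constant([[0.1, 0.8]])
>>> loss = objectives.mean_squared_error(y_true, y_pred)  # same as objectives.mse / objectives.MSE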

deepchem.nn.regularizers module

Ops for regularizers

Code borrowed from Keras.

deepchem.nn.regularizers.ActivityRegularizer

alias of L1L2Regularizer

class deepchem.nn.regularizers.L1L2Regularizer(l1=0.0, l2=0.0)[source]

Bases: deepchem.nn.regularizers.Regularizer

Regularizer for L1 and L2 regularization.

# Arguments
l1: Float; L1 regularization factor.
l2: Float; L2 regularization factor.
class deepchem.nn.regularizers.Regularizer[source]

Bases: object

deepchem.nn.regularizers.WeightRegularizer

alias of L1L2Regularizer

deepchem.nn.regularizers.activity_l1(l=0.01)[source]
deepchem.nn.regularizers.activity_l1l2(l1=0.01, l2=0.01)[source]
deepchem.nn.regularizers.activity_l2(l=0.01)[source]
deepchem.nn.regularizers.l1(l=0.01)[source]
deepchem.nn.regularizers.l1l2(l1=0.01, l2=0.01)[source]
deepchem.nn.regularizers.l2(l=0.01)[source]
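
A minimal usage sketch, passing a regularizer to a layer documented above as accepting one (BatchNormalization's gamma_regularizer):

>>> from deepchem.nn import regularizers
>>> from deepchem.nn.copy import BatchNormalization
>>> reg = regularizers.l2(l=0.01)  # an L1L2Regularizer with only the L2 term set
>>> norm = BatchNormalization(gamma_regularizer=reg)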

deepchem.nn.weave_layers module

Created on Thu Mar 30 14:02:04 2017

@author: michael

class deepchem.nn.weave_layers.AlternateWeaveGather(batch_size, n_input=128, gaussian_expand=False, init='glorot_uniform', activation='tanh', epsilon=0.001, momentum=0.99, **kwargs)[source]

Bases: deepchem.nn.weave_layers.WeaveGather

Alternate implementation of the weave gather layer, corresponding to AlternateWeaveLayer.

__call__(x)

Wrapper around self.call() that handles x, which can be a tensor or a list/tuple of tensors.

add_loss(losses, inputs=None)

Adds losses to model.

add_weight(shape, initializer, regularizer=None, name=None)

Adds a weight variable to the layer.

Parameters:
  • shape (The shape tuple of the weight.) –
  • initializer (An Initializer instance (callable).) –
  • regularizer (An optional Regularizer instance.) –
build()
call(x, mask=None)[source]

Execute this layer on input tensors.

x = [atom_features, atom_split]

Parameters:
  • x (list) – Tensors as listed above
  • mask (bool, optional) – Ignored. Present only to shadow superclass call() method.
Returns:

outputs – Tensor of molecular features

Return type:

Tensor

gaussian_histogram(x)
class deepchem.nn.weave_layers.AlternateWeaveLayer(max_atoms, n_atom_input_feat=75, n_pair_input_feat=14, n_atom_output_feat=50, n_pair_output_feat=50, n_hidden_AA=50, n_hidden_PA=50, n_hidden_AP=50, n_hidden_PP=50, update_pair=True, init='glorot_uniform', activation='relu', dropout=None, **kwargs)[source]

Bases: deepchem.nn.weave_layers.WeaveLayer

Alternate implementation of the weave module: same variables, different graph structure.

__call__(x)

Wrapper around self.call() that handles x, which can be a tensor or a list/tuple of tensors.

add_loss(losses, inputs=None)

Adds losses to model.

add_weight(shape, initializer, regularizer=None, name=None)

Adds a weight variable to the layer.

Parameters:
  • shape (The shape tuple of the weight.) –
  • initializer (An Initializer instance (callable).) –
  • regularizer (An optional Regularizer instance.) –
build()

Construct internal trainable weights.

call(x, mask=None)[source]

Execute this layer on input tensors.

x = [atom_features, pair_features, pair_split, atom_split, atom_to_pair]

Parameters:
  • x (list) – list of Tensors of form described above.
  • mask (bool, optional) – Ignored. Present only to shadow superclass call() method.
Returns:

  • A (Tensor) – Tensor of atom_features
  • P (Tensor) – Tensor of pair_features

class deepchem.nn.weave_layers.WeaveConcat(batch_size, n_atom_input_feat=50, n_output=128, init='glorot_uniform', activation='tanh', **kwargs)[source]

Bases: deepchem.nn.copy.Layer

Concatenates a batch of molecules into a batch of atoms.

__call__(x)

Wrapper around self.call() that handles x, which can be a tensor or a list/tuple of tensors.

add_loss(losses, inputs=None)

Adds losses to model.

add_weight(shape, initializer, regularizer=None, name=None)

Adds a weight variable to the layer.

Parameters:
  • shape (The shape tuple of the weight.) –
  • initializer (An Initializer instance (callable).) –
  • regularizer (An optional Regularizer instance.) –
build()[source]

Construct internal trainable weights.

call(x, mask=None)[source]

Execute this layer on input tensors.

x = [atom_features, atom_mask]

Parameters:
  • x (list) – Tensors as listed above
  • mask (bool, optional) – Ignored. Present only to shadow superclass call() method.
Returns:

outputs – Tensor of concatenated atom features

Return type:

Tensor

class deepchem.nn.weave_layers.WeaveGather(batch_size, n_input=128, gaussian_expand=False, init='glorot_uniform', activation='tanh', epsilon=0.001, momentum=0.99, **kwargs)[source]

Bases: deepchem.nn.copy.Layer

Gather layer of the Weave model. A batch of normalized atom features goes through a hidden layer and is then summed to form molecular features.

__call__(x)

Wrapper around self.call() that handles x, which can be a tensor or a list/tuple of tensors.

add_loss(losses, inputs=None)

Adds losses to model.

add_weight(shape, initializer, regularizer=None, name=None)

Adds a weight variable to the layer.

Parameters:
  • shape (The shape tuple of the weight.) –
  • initializer (An Initializer instance (callable).) –
  • regularizer (An optional Regularizer instance.) –
build()[source]
call(x, mask=None)[source]

Execute this layer on input tensors.

x = [atom_features, membership]

Parameters:
  • x (list) – Tensors as listed above
  • mask (bool, optional) – Ignored. Present only to shadow superclass call() method.
Returns:

outputs – Tensor of molecular features

Return type:

Tensor

gaussian_histogram(x)[source]
class deepchem.nn.weave_layers.WeaveLayer(max_atoms, n_atom_input_feat=75, n_pair_input_feat=14, n_atom_output_feat=50, n_pair_output_feat=50, n_hidden_AA=50, n_hidden_PA=50, n_hidden_AP=50, n_hidden_PP=50, update_pair=True, init='glorot_uniform', activation='relu', dropout=None, **kwargs)[source]

Bases: deepchem.nn.copy.Layer

Main layer of the Weave model. For each molecule, atom features and pair features are recombined to generate new atom (pair) features.

Detailed structure and explanations: https://arxiv.org/abs/1603.00856

__call__(x)

Wrapper around self.call() that handles x, which can be a tensor or a list/tuple of tensors.

add_loss(losses, inputs=None)

Adds losses to model.

add_weight(shape, initializer, regularizer=None, name=None)

Adds a weight variable to the layer.

Parameters:
  • shape (The shape tuple of the weight.) –
  • initializer (An Initializer instance (callable).) –
  • regularizer (An optional Regularizer instance.) –
build()[source]

Construct internal trainable weights.

call(x, mask=None)[source]

Execute this layer on input tensors.

x = [atom_features, pair_features, atom_mask, pair_mask]

Parameters:
  • x (list) – list of Tensors of form described above.
  • mask (bool, optional) – Ignored. Present only to shadow superclass call() method.
Returns:

  • A (Tensor) – Tensor of atom_features
  • P (Tensor) – Tensor of pair_features

Module contents

Imports a number of useful deep learning primitives into one place.