Activations for models.
Copied over from Keras.
deepchem.nn.activations.get_from_module(identifier, module_params, module_name, instantiate=False, kwargs=None)
Retrieves a class or function member of a module.
Returns: The target object.
Raises: ValueError – if the identifier cannot be found.
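As an illustration of the lookup pattern this helper implements, here is a minimal sketch (not the exact DeepChem code), assuming module_params is a dict-like namespace such as vars(module):

def get_from_module(identifier, module_params, module_name,
                    instantiate=False, kwargs=None):
  # Resolve a string identifier to a member of the module's namespace.
  if isinstance(identifier, str):
    res = module_params.get(identifier)
    if res is None:
      raise ValueError('Invalid %s: %s' % (module_name, identifier))
    if instantiate:
      return res(**kwargs) if kwargs else res()
    return res
  # Anything that is not a string is assumed to already be the object.
  return identifier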
Place constraints on models.
deepchem.nn.constraints.MaxNorm(m=2, axis=0)
Bases: deepchem.nn.constraints.Constraint
MaxNorm weight constraint.
Constrains the weights incident to each hidden unit to have a norm less than or equal to a desired value.
deepchem.nn.constraints.NonNeg
Bases: deepchem.nn.constraints.Constraint
Constrains the weights to be nonnegative.
deepchem.nn.constraints.UnitNorm(axis=0)
Bases: deepchem.nn.constraints.Constraint
Constrains the weights incident to each hidden unit to have unit norm.
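For intuition, here are minimal NumPy sketches of the projections these three constraints apply, assuming a weight matrix of shape (n_in, n_out) with axis=0 giving one norm per hidden unit; they mirror the math, not DeepChem's exact code:

import numpy as np

def max_norm(w, m=2, axis=0, eps=1e-7):
  norms = np.sqrt(np.sum(np.square(w), axis=axis, keepdims=True))
  clipped = np.clip(norms, 0, m)    # shrink any norm above m down to m
  return w * clipped / (eps + norms)

def non_neg(w):
  return w * (w >= 0)               # zero out negative weights

def unit_norm(w, axis=0, eps=1e-7):
  return w / (eps + np.sqrt(np.sum(np.square(w), axis=axis, keepdims=True)))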
Copies classes from Keras to remove the dependency.
Most of this code is copied over from Keras; it serves as a staging area while we remove our Keras dependency.
deepchem.nn.copy.BatchNormalization(epsilon=0.001, mode=0, axis=1, momentum=0.99, beta_init='zero', gamma_init='one', gamma_regularizer=None, beta_regularizer=None, **kwargs)
Bases: deepchem.nn.copy.Layer
Batch normalization layer (Ioffe and Szegedy, 2014).
Normalizes the activations of the previous layer at each batch, i.e. applies a transformation that keeps the mean activation close to 0 and the activation standard deviation close to 1.
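For intuition, a minimal NumPy sketch of the training-time transform (gamma and beta are the learned scale and shift):

import numpy as np

def batch_norm(x, gamma, beta, epsilon=0.001, axis=0):
  mean = x.mean(axis=axis, keepdims=True)
  var = x.var(axis=axis, keepdims=True)
  x_hat = (x - mean) / np.sqrt(var + epsilon)  # zero mean, unit variance
  return gamma * x_hat + beta

At test time, running estimates of the mean and variance (tracked with momentum) are used in place of the batch statistics.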
__call__(x)
Wrapper around self.call(). x may be a tensor or a list/tuple of tensors.
add_loss(losses, inputs=None)
Adds losses to the model.
add_weight(shape, initializer, regularizer=None, name=None)
Adds a weight variable to the layer.
deepchem.nn.copy.Dense(output_dim, input_dim, init='glorot_uniform', activation='relu', bias=True, **kwargs)
Bases: deepchem.nn.copy.Layer
Just your regular densely-connected NN layer.
TODO(rbharath): Make this functional in deepchem
Example:
>>> import deepchem as dc
>>> # as the first layer in a sequential model:
>>> model = dc.models.Sequential()
>>> model.add(dc.nn.Input(shape=16))
>>> model.add(dc.nn.Dense(32))
>>> # the model will now take as input arrays of shape (*, 16)
>>> # and output arrays of shape (*, 32)
add_loss(losses, inputs=None)
Adds losses to the model.
add_weight(shape, initializer, regularizer=None, name=None)
Adds a weight variable to the layer.
call(x)
This is where the layer's logic lives.
Parameters: x – an input tensor, or a list/tuple of input tensors.
Returns: A tensor or list/tuple of tensors.
deepchem.nn.copy.Dropout(p, seed=None, **kwargs)
Bases: deepchem.nn.copy.Layer
Applies Dropout to the input.
Dropout consists of randomly setting a fraction p of input units to 0 at each update during training time, which helps prevent overfitting.
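A minimal NumPy sketch of inverted dropout, the standard formulation of this operation (surviving units are rescaled by 1/(1-p) so expected activations are unchanged; the exact DeepChem scaling may differ):

import numpy as np

def dropout(x, p, rng=np.random):
  mask = rng.random_sample(x.shape) >= p   # keep each unit with probability 1-p
  return x * mask / (1.0 - p)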
__call__(x)
Wrapper around self.call(). x may be a tensor or a list/tuple of tensors.
add_loss(losses, inputs=None)
Adds losses to the model.
add_weight(shape, initializer, regularizer=None, name=None)
Adds a weight variable to the layer.
deepchem.nn.copy.Input(shape=None, batch_shape=None, name=None, dtype=tf.float32)
Input() is used to create a placeholder input.
deepchem.nn.copy.InputLayer(input_shape=None, batch_input_shape=None, input_dtype=None, name=None)
Bases: deepchem.nn.copy.Layer
Layer to be used as an entry point into a graph.
Creates a placeholder tensor (pass the arguments input_shape or batch_input_shape as well as input_dtype).
add_loss(losses, inputs=None)
Adds losses to the model.
add_weight(shape, initializer, regularizer=None, name=None)
Adds a weight variable to the layer.
call(x)
This is where the layer's logic lives.
Parameters: x – an input tensor, or a list/tuple of input tensors.
Returns: A tensor or list/tuple of tensors.
deepchem.nn.copy.Layer(**kwargs)
Bases: object
Abstract base layer class.
name
String, must be unique within a model.
trainable
Boolean, whether the layer weights will be updated during training.
uses_learning_phase
Whether any operation of the layer uses model_ops.in_training_phase() or model_ops.in_test_phase().
input_shape
Shape tuple. Provided for convenience, but note that there may be cases in which this attribute is ill-defined (e.g. a shared layer with multiple input shapes), in which case requesting input_shape will raise an Exception. Prefer using layer.get_input_shape_for(input_shape).
output_shape
Shape tuple. See above.
input, output
Input/output tensor(s). Note that if the layer is used more than once (shared layer), this is ill-defined and will raise an exception.
call(x): Where the layer's logic lives.
__call__(x): Wrapper around the layer logic (call); builds the layer first if it has not yet been built.
__call__(x)
Wrapper around self.call(). x may be a tensor or a list/tuple of tensors.
Ops for tensor initialization
deepchem.nn.initializations.glorot_normal(shape, name=None)
Glorot normal variance scaling initializer.
deepchem.nn.initializations.he_normal(shape, name=None)
He normal variance scaling initializer.
deepchem.nn.initializations.he_uniform(shape, name=None)
He uniform variance scaling initializer.
deepchem.nn.initializations.lecun_uniform(shape, name=None)
LeCun uniform variance scaling initializer.
deepchem.nn.initializations.orthogonal(shape, scale=1.1, name=None)
Orthogonal initializer.
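For intuition, minimal NumPy sketches of the scaling rules behind a few of these initializers, assuming a 2-D weight matrix of shape (fan_in, fan_out); the orthogonal sketch follows the usual SVD construction:

import numpy as np

def glorot_normal(shape, rng=np.random):
  fan_in, fan_out = shape[0], shape[1]
  stddev = np.sqrt(2.0 / (fan_in + fan_out))  # balances both fans
  return rng.normal(0.0, stddev, size=shape)

def he_normal(shape, rng=np.random):
  stddev = np.sqrt(2.0 / shape[0])            # scales by fan_in only
  return rng.normal(0.0, stddev, size=shape)

def orthogonal(shape, scale=1.1, rng=np.random):
  flat = (shape[0], int(np.prod(shape[1:])))
  a = rng.normal(0.0, 1.0, flat)
  u, _, vh = np.linalg.svd(a, full_matrices=False)
  q = u if u.shape == flat else vh            # pick the factor matching shape
  return scale * q.reshape(shape)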
Custom Keras Layers.
deepchem.nn.layers.AttnLSTMEmbedding(n_test, n_support, n_feat, max_depth, init='glorot_uniform', activation='linear', dropout=None, **kwargs)
Bases: deepchem.nn.copy.Layer
Implements the AttnLSTM from the matching networks paper.
References:
Matching Networks for One Shot Learning https://arxiv.org/pdf/1606.04080v1.pdf
Order Matters: Sequence to sequence for sets https://arxiv.org/abs/1511.06391
__call__(x)
Wrapper around self.call(). x may be a tensor or a list/tuple of tensors.
add_loss(losses, inputs=None)
Adds losses to the model.
add_weight(shape, initializer, regularizer=None, name=None)
Adds a weight variable to the layer.
call(x_xp, mask=None)
Execute this layer on input tensors.
Parameters: x_xp (list) – List of two tensors, (X, Xp). X should be of shape (n_test, n_feat) and Xp should be of shape (n_support, n_feat), where n_test is the size of the test set, n_support that of the support set, and n_feat is the number of per-atom features.
Returns: Two tensors of the same shapes as the inputs, i.e. the output shapes will be [(n_test, n_feat), (n_support, n_feat)].
Return type: list
deepchem.nn.layers.DAGGather(n_graph_feat=30, n_outputs=30, max_atoms=50, layer_sizes=[100], init='glorot_uniform', activation='relu', dropout=None, **kwargs)
Bases: deepchem.nn.copy.Layer
Gather layer of the DAG model. For each molecule, the graph outputs are summed and fed into another NN.
__call__(x)
Wrapper around self.call(). x may be a tensor or a list/tuple of tensors.
add_loss(losses, inputs=None)
Adds losses to the model.
add_weight(shape, initializer, regularizer=None, name=None)
Adds a weight variable to the layer.
deepchem.nn.layers.DAGLayer(n_graph_feat=30, n_atom_feat=75, max_atoms=50, layer_sizes=[100], init='glorot_uniform', activation='relu', dropout=None, batch_size=64, **kwargs)
Bases: deepchem.nn.copy.Layer
Main layer of the DAG model. For a molecule with n atoms, n different graphs are generated and run through the network. The final outputs of each graph become the graph features of the corresponding atom; these are summed and fed into another network in the DAGGather layer.
__call__(x)
Wrapper around self.call(). x may be a tensor or a list/tuple of tensors.
add_loss(losses, inputs=None)
Adds losses to the model.
add_weight(shape, initializer, regularizer=None, name=None)
Adds a weight variable to the layer.
call(x, mask=None)
Execute this layer on input tensors.
x = [atom_features, parents, calculation_orders, calculation_masks, membership, n_atoms]
Returns: outputs – Tensor of atom features, of shape (n_atoms, n_graph_feat)
Return type: tf.Tensor
deepchem.nn.layers.DTNNEmbedding(n_embedding=30, periodic_table_length=30, init='glorot_uniform', **kwargs)
Bases: deepchem.nn.copy.Layer
Generates embeddings for all atoms in the batch.
__call__(x)
Wrapper around self.call(). x may be a tensor or a list/tuple of tensors.
add_loss(losses, inputs=None)
Adds losses to the model.
add_weight(shape, initializer, regularizer=None, name=None)
Adds a weight variable to the layer.
deepchem.nn.layers.DTNNGather(n_embedding=30, n_outputs=100, layer_sizes=[100], output_activation=True, init='glorot_uniform', activation='tanh', **kwargs)
Bases: deepchem.nn.copy.Layer
Maps the atomic features into molecular properties and sums them.
__call__(x)
Wrapper around self.call(). x may be a tensor or a list/tuple of tensors.
add_loss(losses, inputs=None)
Adds losses to the model.
add_weight(shape, initializer, regularizer=None, name=None)
Adds a weight variable to the layer.
deepchem.nn.layers.DTNNStep(n_embedding=30, n_distance=100, n_hidden=60, init='glorot_uniform', activation='tanh', **kwargs)
Bases: deepchem.nn.copy.Layer
A convolution step that merges the distance and atom information of all other atoms into the current atom.
Model based on https://arxiv.org/abs/1609.08259
__call__(x)
Wrapper around self.call(). x may be a tensor or a list/tuple of tensors.
add_loss(losses, inputs=None)
Adds losses to the model.
add_weight(shape, initializer, regularizer=None, name=None)
Adds a weight variable to the layer.
call(x)
Execute this layer on input tensors.
Parameters: x (list of Tensor) – should be [atom_features: n_atoms*n_embedding, distance_matrix: n_pairs*n_distance, atom_membership: n_atoms, distance_membership_i: n_pairs, distance_membership_j: n_pairs].
Returns: New embeddings for the atoms, of the same shape as x[0].
Return type: tf.Tensor
deepchem.nn.layers.GraphConv(nb_filter, n_atom_features, init='glorot_uniform', activation='linear', dropout=None, max_deg=10, min_deg=0, **kwargs)
Bases: deepchem.nn.copy.Layer
Performs a graph convolution.
Note this layer expects the presence of placeholders defined by GraphTopology and expects that they follow the ordering provided by GraphTopology.get_input_placeholders().
__call__(x)
Wrapper around self.call(). x may be a tensor or a list/tuple of tensors.
add_loss(losses, inputs=None)
Adds losses to the model.
add_weight(shape, initializer, regularizer=None, name=None)
Adds a weight variable to the layer.
build()
Construct internal trainable weights.
Parameters: n_atom_features (int) – Number of features provided per atom.
call(x, mask=None)
Execute this layer on input tensors.
This layer is meant to be executed on a Graph, so x is expected to be a list of placeholders, with the first placeholder the list of atom_features (learned or input) at this level, the second the deg_slice, the third the membership, and the remaining the deg_adj_lists.
Visually:
x = [atom_features, deg_slice, membership, deg_adj_list placeholders...]
Returns: atom_features – Of shape (n_atoms, nb_filter)
Return type: tf.Tensor
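To make the degree-stratified layout concrete, here is a hedged NumPy sketch of one plausible reading of the convolution: atoms are grouped contiguously by degree (deg_slice holds (start, size) rows), each atom's neighbor features are summed via its degree-specific adjacency list, and a per-degree affine transform combines self and neighbor information. The exact weight wiring in DeepChem's implementation may differ.

import numpy as np

def graph_conv_sketch(atom_feats, deg_adj_lists, deg_slice, W_list, b_list):
  # atom_feats: (n_atoms, n_feat); deg_adj_lists[i]: (size_i, degree_i)
  out = np.zeros((atom_feats.shape[0], W_list[0].shape[1]))
  for deg_idx, adj in enumerate(deg_adj_lists):
    start, size = deg_slice[deg_idx]
    if size == 0:
      continue
    self_feats = atom_feats[start:start + size]
    nbr_sum = atom_feats[adj].sum(axis=1)     # sum each atom's neighbors
    out[start:start + size] = (
        (self_feats + nbr_sum) @ W_list[deg_idx] + b_list[deg_idx])
  return out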
deepchem.nn.layers.GraphGather(batch_size, activation='linear', **kwargs)
Bases: deepchem.nn.copy.Layer
Gathers information for each molecule.
The various graph convolution operations expect as input a tensor atom_features of shape (n_atoms, n_feat). However, we train on batches of molecules at a time. The GraphTopology object groups a list of molecules into the atom_features tensor. The tensorial operations are done on this tensor, but at the end the atoms need to be grouped back into molecules. This layer takes care of that operation.
Note this layer expects the presence of placeholders defined by GraphTopology and expects that they follow the ordering provided by GraphTopology.get_input_placeholders().
__call__(x)
Wrapper around self.call(). x may be a tensor or a list/tuple of tensors.
add_loss(losses, inputs=None)
Adds losses to the model.
add_weight(shape, initializer, regularizer=None, name=None)
Adds a weight variable to the layer.
call(x, mask=None)
Execute this layer on input tensors.
This layer is meant to be executed on a Graph, so x is expected to be a list of placeholders, with the first placeholder the list of atom_features (learned or input) at this level, the second the deg_slice, the third the membership, and the remaining the deg_adj_lists.
Visually:
x = [atom_features, deg_slice, membership, deg_adj_list placeholders...]
Returns: Of shape (batch_size, n_feat), where n_feat is the number of atom_features
Return type: tf.Tensor
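The gather itself is a segment sum over the membership vector, which maps each atom to its molecule's index in the batch; a minimal NumPy sketch:

import numpy as np

def graph_gather_sketch(atom_feats, membership, batch_size):
  mol_feats = np.zeros((batch_size, atom_feats.shape[1]))
  np.add.at(mol_feats, membership, atom_feats)  # segment sum by molecule
  return mol_feats

In TensorFlow this is essentially a segment sum of atom_features over membership (e.g. tf.segment_sum).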
deepchem.nn.layers.GraphPool(max_deg=10, min_deg=0, **kwargs)
Bases: deepchem.nn.copy.Layer
Performs a pooling operation over an arbitrary graph.
Performs a max pool over the feature vectors of an atom and its neighbors in the bond graph. Returns a tensor of the same size as the input.
__call__(x)
Wrapper around self.call(). x may be a tensor or a list/tuple of tensors.
add_loss(losses, inputs=None)
Adds losses to the model.
add_weight(shape, initializer, regularizer=None, name=None)
Adds a weight variable to the layer.
call(x, mask=None)
Execute this layer on input tensors.
This layer is meant to be executed on a Graph, so x is expected to be a list of placeholders, with the first placeholder the list of atom_features (learned or input) at this level, the second the deg_slice, the third the membership, and the remaining the deg_adj_lists.
Visually:
x = [atom_features, deg_slice, membership, deg_adj_list placeholders...]
Returns: Of shape (n_atoms, n_feat), where n_feat is the number of atom_features
Return type: tf.Tensor
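A hedged NumPy sketch of the pooling: each atom's new feature vector is the elementwise max over its own features and those of its neighbors, read from the degree-stratified adjacency lists (names follow the docs above; the exact DeepChem indexing may differ):

import numpy as np

def graph_pool_sketch(atom_feats, deg_adj_lists, deg_slice):
  out = atom_feats.copy()
  for deg_idx, adj in enumerate(deg_adj_lists):
    start, size = deg_slice[deg_idx]
    if size == 0 or adj.shape[1] == 0:
      continue                               # no neighbors at this degree
    nbr_max = atom_feats[adj].max(axis=1)    # (size, n_feat)
    out[start:start + size] = np.maximum(atom_feats[start:start + size],
                                         nbr_max)
  return out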
deepchem.nn.layers.LSTMStep(output_dim, input_dim, init='glorot_uniform', inner_init='orthogonal', forget_bias_init='one', activation='tanh', inner_activation='hard_sigmoid', **kwargs)
Bases: deepchem.nn.copy.Layer
An LSTM whose call is a single step of the LSTM.
This layer exists because the Keras LSTM layer is intrinsically tied to an RNN with sequence inputs. Here we do not use sequence inputs; instead we generate a sequence of inputs from the intermediate outputs of the LSTM, which requires step-by-step operation of the LSTM.
__call__(x)
Wrapper around self.call(). x may be a tensor or a list/tuple of tensors.
add_loss(losses, inputs=None)
Adds losses to the model.
add_weight(shape, initializer, regularizer=None, name=None)
Adds a weight variable to the layer.
deepchem.nn.layers.ResiLSTMEmbedding(n_test, n_support, n_feat, max_depth, init='glorot_uniform', activation='linear', **kwargs)
Bases: deepchem.nn.copy.Layer
Embeds its inputs using an LSTM layer.
__call__(x)
Wrapper around self.call(). x may be a tensor or a list/tuple of tensors.
add_loss(losses, inputs=None)
Adds losses to the model.
add_weight(shape, initializer, regularizer=None, name=None)
Adds a weight variable to the layer.
call(argument, mask=None)
Execute this layer on input tensors.
Parameters: argument (list) – List of two tensors, (X, Xp). X should be of shape (n_test, n_feat) and Xp should be of shape (n_support, n_feat), where n_test is the size of the test set, n_support that of the support set, and n_feat is the number of per-atom features.
Returns: Two tensors of the same shapes as the inputs, i.e. the output shapes will be [(n_test, n_feat), (n_support, n_feat)].
Return type: list
deepchem.nn.layers.graph_conv(atoms, deg_adj_lists, deg_slice, max_deg, min_deg, W_list, b_list)
Core TensorFlow function implementing graph convolution.
Returns: Of shape (n_atoms, n_feat)
Return type: tf.Tensor
deepchem.nn.layers.graph_gather(atoms, membership_placeholder, batch_size)
Returns: Of shape (batch_size, n_feat)
Return type: tf.Tensor
deepchem.nn.layers.graph_pool(atoms, deg_adj_lists, deg_slice, max_deg, min_deg)
Returns: Of shape (batch_size, n_feat)
Return type: tf.Tensor
Ops for graph construction.
Large amounts of code borrowed from Keras. Will try to incorporate into DeepChem properly.
deepchem.nn.model_ops.add_bias(tensor, init=None, name=None)
Add a bias term to a tensor.
Returns: A biased tensor with the same shape as the input tensor.
Return type: tf.Tensor
deepchem.nn.model_ops.binary_crossentropy(output, target, from_logits=False)
Binary crossentropy between an output tensor and a target tensor.
output: A tensor. target: A tensor with the same shape as output. from_logits: Whether output is expected to be a logits tensor.
By default, we consider that output encodes a probability distribution.
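A minimal NumPy sketch of the computation, including the from_logits branch (apply a sigmoid first, then the crossentropy, clipping to avoid log(0)):

import numpy as np

def binary_crossentropy(output, target, from_logits=False, eps=1e-7):
  if from_logits:
    output = 1.0 / (1.0 + np.exp(-output))   # sigmoid
  output = np.clip(output, eps, 1 - eps)     # avoid log(0)
  return -(target * np.log(output) + (1 - target) * np.log(1 - output))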
deepchem.nn.model_ops.cast_to_floatx(x)
Cast a Numpy array to the default Keras float type.
Parameters: x – a Numpy array.
Returns: The same Numpy array, cast to its new type.
deepchem.nn.model_ops.categorical_crossentropy(output, target, from_logits=False)
Categorical crossentropy between an output tensor and a target tensor, where the target is a tensor of the same shape as the output.
# TODO(rbharath): Should probably swap this over to tf mode.
deepchem.nn.model_ops.clip(x, min_value, max_value)
Element-wise value clipping.
Returns: A tensor.
deepchem.nn.model_ops.concatenate(tensors, axis=1)
Concatenates a list of tensors alongside the specified axis.
Returns: A tensor.
deepchem.nn.model_ops.cosine_distances(test, support)
Computes pairwise cosine distances between the provided tensors.
Returns: Of shape (n_test, n_support)
Return type: tf.Tensor
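The pairwise computation reduces to row normalization followed by one matrix product; a minimal NumPy sketch of the cosine-similarity kernel underlying this function (whether the result is reported as similarity or distance is a convention of the implementation):

import numpy as np

def cosine_kernel(test, support, eps=1e-7):
  test_n = test / (eps + np.linalg.norm(test, axis=1, keepdims=True))
  supp_n = support / (eps + np.linalg.norm(support, axis=1, keepdims=True))
  return test_n @ supp_n.T                   # (n_test, n_support)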
deepchem.nn.model_ops.dot(x, y)
Multiplies 2 tensors (and/or variables) and returns a tensor. When multiplying an nD tensor with another nD tensor, it reproduces the Theano behavior (e.g. (2, 3).(4, 3, 5) = (2, 4, 5)).
Returns: A tensor, the dot product of x and y.
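The Theano behavior contracts the last axis of x with the second-to-last axis of y, which NumPy's tensordot reproduces directly:

import numpy as np

x = np.ones((2, 3))
y = np.ones((4, 3, 5))
out = np.tensordot(x, y, axes=(x.ndim - 1, y.ndim - 2))
print(out.shape)  # (2, 4, 5)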
deepchem.nn.model_ops.dropout(tensor, dropout_prob, training=True, training_only=True)
Random dropout.
This implementation supports "always-on" dropout (training_only=False), which can be used to calculate model uncertainty. See Gal and Ghahramani, http://arxiv.org/abs/1506.02142.
Returns: A tensor with the same shape as the input tensor.
Return type: tf.Tensor
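A hedged usage sketch of always-on dropout for uncertainty estimation in the Gal and Ghahramani style: keep dropout active at prediction time, draw several stochastic forward passes, and read the spread as an uncertainty estimate. predict_fn here is a hypothetical function wrapping a forward pass built with training_only=False dropout.

import numpy as np

def mc_dropout_predict(predict_fn, x, n_samples=50):
  # predict_fn is hypothetical: one stochastic forward pass over x.
  preds = np.stack([predict_fn(x) for _ in range(n_samples)])
  return preds.mean(axis=0), preds.std(axis=0)  # estimate and uncertainty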
deepchem.nn.model_ops.elu(x, alpha=1.0)
Exponential linear unit.
Returns: A tensor.
deepchem.nn.model_ops.epsilon()
Returns the value of the fuzz factor used in numeric expressions.
Returns: A float.
deepchem.nn.model_ops.euclidean_distance(test, support, max_dist_sq=20)
Computes pairwise euclidean distances between the provided tensors.
TODO(rbharath): BROKEN! THIS DOESN'T WORK!
Returns: Of shape (n_test, n_support)
Return type: tf.Tensor
deepchem.nn.model_ops.fully_connected_layer(tensor, size=None, weight_init=None, bias_init=None, name=None)
Fully connected layer.
Returns: A new tensor representing the output of the fully connected layer.
Return type: tf.Tensor
deepchem.nn.model_ops.get_dtype(x)
Returns the dtype of a Keras tensor or variable, as a string.
Parameters: x – a tensor or variable.
Returns: String, the dtype of x.
deepchem.nn.model_ops.get_ndim(x)
Returns the number of axes in a tensor, as an integer.
Parameters: x – a tensor or variable.
Returns: Integer (scalar), number of axes.
deepchem.nn.model_ops.get_uid(prefix='')
Provides a unique UID given a string prefix.
Parameters: prefix – a string.
Returns: An integer.
deepchem.nn.model_ops.hard_sigmoid(x)
Segment-wise linear approximation of sigmoid. Faster than sigmoid. Returns 0 if x < -2.5 and 1 if x > 2.5. For -2.5 <= x <= 2.5, returns 0.2 * x + 0.5.
Parameters: x – a tensor or variable.
Returns: A tensor.
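The piecewise definition collapses to a single clip; a NumPy equivalent:

import numpy as np

def hard_sigmoid(x):
  return np.clip(0.2 * x + 0.5, 0.0, 1.0)  # flat at 0 below -2.5, at 1 above 2.5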
deepchem.nn.model_ops.in_train_phase(x, alt)
Selects x in train phase, and alt otherwise. Note that alt should have the same shape as x.
Returns: Either x or alt, based on K.learning_phase.
deepchem.nn.model_ops.int_shape(x)
Returns the shape of a Keras tensor or Keras variable as a tuple of integers or None entries.
Parameters: x – a tensor or variable.
Returns: A tuple of integers (or None entries).
deepchem.nn.model_ops.l2_normalize(x, axis)
Normalizes a tensor with respect to the L2 norm along the specified axis.
Returns: A tensor.
deepchem.nn.model_ops.learning_phase()
Returns the learning phase flag.
The learning phase flag is a bool tensor (0 = test, 1 = train) to be passed as input to any Keras function that uses a different behavior at train time and test time.
deepchem.nn.model_ops.logits(features, num_classes=2, weight_init=None, bias_init=None, dropout_prob=None, name=None)
Create a logits tensor for a single classification task.
You almost certainly don't want dropout on there – it's like randomly setting the (unscaled) probability of a target class to 0.5.
Returns: A logits tensor with shape batch_size x num_classes.
deepchem.nn.model_ops.lrelu(alpha=0.01)
Create a leaky rectified linear unit function.
This function returns a new function that implements the LReLU with the specified alpha. The returned value can be used as an activation function in network layers.
Parameters: alpha (float) – the slope of the function when x < 0.
Returns: a function f(x) that returns alpha*x when x < 0, and x when x > 0.
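A minimal sketch of the closure pattern, in NumPy:

import numpy as np

def lrelu(alpha=0.01):
  def activation(x):
    return np.where(x < 0, alpha * x, x)   # alpha*x below zero, x otherwise
  return activation

leaky = lrelu(0.1)
print(leaky(np.array([-2.0, 3.0])))        # [-0.2  3. ]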
deepchem.nn.model_ops.max(x, axis=None, keepdims=False)
Maximum value in a tensor.
Returns: A tensor with the maximum values of x.
deepchem.nn.model_ops.mean(x, axis=None, keepdims=False)
Mean of a tensor, alongside the specified axis.
Returns: A tensor with the mean of the elements of x.
deepchem.nn.model_ops.multitask_logits(features, num_tasks, num_classes=2, weight_init=None, bias_init=None, dropout_prob=None, name=None)
Create a logits tensor for each classification task.
Returns: A list of logits tensors, one for each classification task.
deepchem.nn.model_ops.normalize_batch_in_training(x, gamma, beta, reduction_axes, epsilon=0.001)
Computes the mean and std of the batch, then applies batch normalization to it.
Returns: A tuple of length 3: (normalized_tensor, mean, variance).
deepchem.nn.model_ops.ones(shape, dtype=None, name=None)
Instantiates an all-ones tensor variable and returns it.
Returns: A Keras variable, filled with 1.0.
deepchem.nn.model_ops.optimizer(optimizer='adam', learning_rate=0.001, momentum=0.9)
Create a model optimizer.
Returns: A training Optimizer.
deepchem.nn.model_ops.random_normal_variable(shape, mean, scale, dtype=tf.float32, name=None, seed=None)
Instantiates a variable filled with samples drawn from a normal distribution and returns it.
Returns: A tf.Variable, filled with drawn samples.
deepchem.nn.model_ops.random_uniform_variable(shape, low, high, dtype=tf.float32, name=None, seed=None)
Instantiates a variable filled with samples drawn from a uniform distribution and returns it.
Returns: A tf.Variable, filled with drawn samples.
deepchem.nn.model_ops.relu(x, alpha=0.0, max_value=None)
Rectified linear unit. With default values, it returns element-wise max(x, 0).
Returns: A tensor.
deepchem.nn.model_ops.selu(x)
Scaled Exponential Linear Unit.
Parameters: x – a tensor or variable.
Returns: A tensor.
deepchem.nn.model_ops.softmax_N(tensor, name=None)
Apply softmax across the last dimension of a tensor.
Returns: A tensor with softmax-normalized values on the last dimension.
deepchem.nn.model_ops.sparse_categorical_crossentropy(output, target, from_logits=False)
Categorical crossentropy between an output tensor and a target tensor, where the target is an integer tensor.
deepchem.nn.model_ops.sqrt(x)
Element-wise square root.
Parameters: x – an input tensor.
Returns: A tensor.
deepchem.nn.model_ops.sum(x, axis=None, keepdims=False)
Sum of the values in a tensor, alongside the specified axis.
Returns: A tensor with the sum of x.
deepchem.nn.model_ops.switch(condition, then_expression, else_expression)
Switches between two operations depending on a scalar value (int or bool). Note that both then_expression and else_expression should be symbolic tensors of the same shape.
Returns: The selected tensor.
deepchem.nn.model_ops.var(x, axis=None, keepdims=False)
Variance of a tensor, alongside the specified axis.
Returns: A tensor with the variance of the elements of x.
deepchem.nn.model_ops.weight_decay(penalty_type, penalty)
Add weight decay.
Parameters: model – TensorflowGraph.
Returns: A scalar tensor containing the weight decay cost.
Raises: NotImplementedError – if an unsupported penalty type is requested.
deepchem.nn.model_ops.zeros(shape, dtype=tf.float32, name=None)
Instantiates an all-zeros variable and returns it.
Returns: A variable (including Keras metadata), filled with 0.0.
Ops for objectives
Code borrowed from Keras.
deepchem.nn.objectives.KLD(y_true, y_pred)
deepchem.nn.objectives.MAE(y_true, y_pred)
deepchem.nn.objectives.MAPE(y_true, y_pred)
deepchem.nn.objectives.MSE(y_true, y_pred)
deepchem.nn.objectives.MSLE(y_true, y_pred)
deepchem.nn.objectives.cosine(y_true, y_pred)
deepchem.nn.objectives.kld(y_true, y_pred)
deepchem.nn.objectives.mae(y_true, y_pred)
deepchem.nn.objectives.mape(y_true, y_pred)
deepchem.nn.objectives.mse(y_true, y_pred)
deepchem.nn.objectives.msle(y_true, y_pred)
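These follow the standard Keras objective definitions (the uppercase names appear to be aliases for the lowercase functions). For reference, minimal NumPy sketches of a few, with the reduction over the last axis as in Keras:

import numpy as np

def mse(y_true, y_pred):
  return np.mean(np.square(y_pred - y_true), axis=-1)

def mae(y_true, y_pred):
  return np.mean(np.abs(y_pred - y_true), axis=-1)

def msle(y_true, y_pred, eps=1e-7):
  # squared error in log space; clip to keep log well-defined
  return np.mean(np.square(np.log(1 + np.clip(y_pred, eps, None))
                           - np.log(1 + np.clip(y_true, eps, None))), axis=-1)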
Ops for regularizers
Code borrowed from Keras.
deepchem.nn.regularizers.ActivityRegularizer
alias of L1L2Regularizer
deepchem.nn.regularizers.L1L2Regularizer(l1=0.0, l2=0.0)
Bases: deepchem.nn.regularizers.Regularizer
Regularizer for L1 and L2 regularization.
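The penalty added to the loss for a weight tensor w is the familiar sum of L1 and L2 terms; a minimal NumPy sketch:

import numpy as np

def l1_l2_penalty(w, l1=0.0, l2=0.0):
  return l1 * np.sum(np.abs(w)) + l2 * np.sum(np.square(w))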
deepchem.nn.regularizers.WeightRegularizer
alias of L1L2Regularizer
Custom layers for the Weave model.
deepchem.nn.weave_layers.AlternateWeaveGather(batch_size, n_input=128, gaussian_expand=False, init='glorot_uniform', activation='tanh', epsilon=0.001, momentum=0.99, **kwargs)
Bases: deepchem.nn.weave_layers.WeaveGather
Alternate implementation of the weave gather layer, corresponding to AlternateWeaveLayer.
__call__(x)
Wrapper around self.call(). x may be a tensor or a list/tuple of tensors.
add_loss(losses, inputs=None)
Adds losses to the model.
add_weight(shape, initializer, regularizer=None, name=None)
Adds a weight variable to the layer.
build()
call(x, mask=None)
Execute this layer on input tensors.
x = [atom_features, atom_split]
Returns: outputs – Tensor of molecular features
Return type: Tensor
gaussian_histogram(x)
deepchem.nn.weave_layers.AlternateWeaveLayer(max_atoms, n_atom_input_feat=75, n_pair_input_feat=14, n_atom_output_feat=50, n_pair_output_feat=50, n_hidden_AA=50, n_hidden_PA=50, n_hidden_AP=50, n_hidden_PP=50, update_pair=True, init='glorot_uniform', activation='relu', dropout=None, **kwargs)
Bases: deepchem.nn.weave_layers.WeaveLayer
Alternate implementation of the weave module: same variables, different graph structures.
__call__(x)
Wrapper around self.call(). x may be a tensor or a list/tuple of tensors.
add_loss(losses, inputs=None)
Adds losses to the model.
add_weight(shape, initializer, regularizer=None, name=None)
Adds a weight variable to the layer.
build()
Construct internal trainable weights.
call(x, mask=None)
Execute this layer on input tensors.
x = [atom_features, pair_features, pair_split, atom_split, atom_to_pair]
deepchem.nn.weave_layers.WeaveConcat(batch_size, n_atom_input_feat=50, n_output=128, init='glorot_uniform', activation='tanh', **kwargs)
Bases: deepchem.nn.copy.Layer
Concatenates a batch of molecules into a batch of atoms.
__call__(x)
Wrapper around self.call(). x may be a tensor or a list/tuple of tensors.
add_loss(losses, inputs=None)
Adds losses to the model.
add_weight(shape, initializer, regularizer=None, name=None)
Adds a weight variable to the layer.
deepchem.nn.weave_layers.WeaveGather(batch_size, n_input=128, gaussian_expand=False, init='glorot_uniform', activation='tanh', epsilon=0.001, momentum=0.99, **kwargs)
Bases: deepchem.nn.copy.Layer
Gather layer of the Weave model: a batch of normalized atom features goes through a hidden layer and is then summed to form molecular features.
__call__(x)
Wrapper around self.call(). x may be a tensor or a list/tuple of tensors.
add_loss(losses, inputs=None)
Adds losses to the model.
add_weight(shape, initializer, regularizer=None, name=None)
Adds a weight variable to the layer.
deepchem.nn.weave_layers.WeaveLayer(max_atoms, n_atom_input_feat=75, n_pair_input_feat=14, n_atom_output_feat=50, n_pair_output_feat=50, n_hidden_AA=50, n_hidden_PA=50, n_hidden_AP=50, n_hidden_PP=50, update_pair=True, init='glorot_uniform', activation='relu', dropout=None, **kwargs)
Bases: deepchem.nn.copy.Layer
Main layer of the Weave model. For each molecule, atom features and pair features are recombined to generate new atom (and pair) features.
Detailed structure and explanations: https://arxiv.org/abs/1603.00856
__call__(x)
Wrapper around self.call(). x may be a tensor or a list/tuple of tensors.
add_loss(losses, inputs=None)
Adds losses to the model.
add_weight(shape, initializer, regularizer=None, name=None)
Adds a weight variable to the layer.
call(x, mask=None)
Execute this layer on input tensors.
x = [atom_features, pair_features, atom_mask, pair_mask]
Imports a number of useful deep learning primitives into one place.