# deepchem.models.tensorflow_models package

## deepchem.models.tensorflow_models.IRV module

class deepchem.models.tensorflow_models.IRV.TensorflowMultiTaskIRVClassifier(n_tasks, K=10, logdir=None, n_classes=2, penalty=0.0, penalty_type='l2', learning_rate=0.001, momentum=0.8, optimizer='adam', batch_size=50, verbose=True, seed=None, **kwargs)[source]
add_example_weight_placeholders(graph, name_scopes)

This method creates the following Placeholders for each task:
weights_%d: Weight tensor with shape batch_size.

Placeholders are wrapped in identity ops to avoid the error caused by feeding and fetching the same tensor.

add_label_placeholders(graph, name_scopes)
add_output_ops(graph, output)
add_training_cost(graph, name_scopes, output, labels, weights)
build(graph, name_scopes, training)[source]

Constructs the graph architecture of IRV as described in:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2750043/

construct_feed_dict(X_b, y_b=None, w_b=None, ids_b=None)
construct_graph(training, seed)

Returns a TensorflowGraph object.

cost(logits, labels, weights)
evaluate(dataset, metrics, transformers=[], per_task_metrics=False)

Evaluates the performance of this model on specified dataset.

Parameters:
- dataset (dc.data.Dataset) – Dataset object.
- metrics (deepchem.metrics.Metric) – Evaluation metric.
- transformers (list) – List of deepchem.transformers.Transformer.
- per_task_metrics (bool) – If True, also return per-task scores.

Returns: dict – Maps tasks to scores under metric.
fit(dataset, nb_epoch=10, max_checkpoints_to_keep=5, log_every_N_batches=50, checkpoint_interval=10, **kwargs)

Fit the model.

Parameters:
- dataset (dc.data.Dataset) – Dataset object holding training data.
- nb_epoch (int) – Number of training epochs.
- max_checkpoints_to_keep (int) – Maximum number of checkpoints to keep; older checkpoints will be deleted.
- log_every_N_batches (int) – Report progress every N batches. Useful for training on very large datasets, where epochs can take a long time to finish.
- checkpoint_interval (int) – Frequency at which to write checkpoints, measured in epochs.

Raises: AssertionError – If model is not in training mode.
fit_on_batch(X, y, w)

Updates existing model with new information.

get_model_filename(model_dir)

Given model directory, obtain filename for the model itself.

get_num_tasks()
get_params(deep=True)

Get parameters for this estimator.

Parameters:
- deep (boolean, optional) – If True, return the parameters for this estimator and contained subobjects that are estimators.

Returns: params – Parameter names mapped to their values (mapping of string to any).
get_params_filename(model_dir)

Given model directory, obtain filename for the model parameters.

get_task_type()

Currently models can only be classifiers or regressors.

get_training_op(graph, loss)

Get training op for applying gradients to variables.

Subclasses that need to do anything fancy with gradients should override this method.

Returns: A training op.

predict(dataset, transformers=[])

Uses self to make predictions on provided Dataset object.

Returns: y_pred – numpy ndarray of shape (n_samples,).
predict_on_batch(X)
predict_proba(dataset, transformers=[], n_classes=2)

TODO: Do transformers even make sense here?

Returns: y_pred – numpy ndarray of shape (n_samples, n_classes*n_tasks).
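The flattened (n_samples, n_classes*n_tasks) layout packs the per-class probabilities for every task side by side. A minimal numpy sketch of unpacking it per task, assuming task-major column grouping (the sizes and grouping here are illustrative, not guaranteed by the API):

```python
import numpy as np

# Illustrative sizes; not tied to any particular model.
n_samples, n_tasks, n_classes = 4, 3, 2

# Stand-in for the flat array predict_proba returns.
y_pred = np.random.rand(n_samples, n_classes * n_tasks)

# Recover a (n_samples, n_tasks, n_classes) view, assuming the columns
# are grouped task by task (task 0's classes first, then task 1's, ...).
per_task = y_pred.reshape(n_samples, n_tasks, n_classes)

# Probability of class 1 for task 0, across all samples.
task0_class1 = per_task[:, 0, 1]
```

Because numpy reshapes in row-major order, `per_task[:, t, :]` picks out the contiguous block of columns belonging to task t.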
predict_proba_on_batch(X)
reload()

Loads model from disk. Thin wrapper around restore() for consistency.

restore()

Restores the model from the provided training checkpoint.

Parameters: checkpoint – string. Path to checkpoint file.
save()

No-op, since TensorFlow models save themselves during fit().

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns: self
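The nested-parameter convention can be sketched in a few lines of plain Python. split_nested_params below is a hypothetical helper, not part of deepchem; it only illustrates how `<component>__<parameter>` names are split:

```python
def split_nested_params(params):
    """Split flat parameter names into per-component dicts, mimicking
    the <component>__<parameter> convention described above.
    Illustrative only; not a deepchem function."""
    out = {}
    for key, value in params.items():
        if "__" in key:
            component, _, name = key.partition("__")
            out.setdefault(component, {})[name] = value
        else:
            out[key] = value
    return out

nested = split_nested_params({"learning_rate": 0.001,
                              "optimizer__momentum": 0.9})
```

Here `"optimizer__momentum"` is routed to the nested `optimizer` component while `"learning_rate"` stays at the top level.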

## deepchem.models.tensorflow_models.fcnet module

TensorFlow implementation of fully connected networks.

class deepchem.models.tensorflow_models.fcnet.TensorGraphMultiTaskClassifier(n_tasks, n_features, layer_sizes=[1000], weight_init_stddevs=0.02, bias_init_consts=1.0, weight_decay_penalty=0.0, weight_decay_penalty_type='l2', dropouts=0.5, activation_fns=<function relu>, n_classes=2, **kwargs)[source]
add_output(layer)
build()
create_submodel(layers=None, loss=None, optimizer=None)

Create an alternate objective for training one piece of a TensorGraph.

A TensorGraph consists of a set of layers, and specifies a loss function and optimizer to use for training those layers. Usually this is sufficient, but there are cases where you want to train different parts of a model separately. For example, a GAN consists of a generator and a discriminator. They are trained separately, and they use different loss functions.

A submodel defines an alternate objective to use in cases like this. It may optionally specify any of the following: a subset of layers in the model to train; a different loss function; and a different optimizer to use. This method creates a submodel, which you can then pass to fit() to use it for training.

Parameters:
- layers (list) – the list of layers to train. If None, all layers in the model will be trained.
- loss (Layer) – the loss function to optimize. If None, the model’s main loss function will be used.
- optimizer (Optimizer) – the optimizer to use for training. If None, the model’s main optimizer will be used.

Returns: the newly created submodel, which can be passed to any of the fitting methods.
default_generator(dataset, epochs=1, predict=False, deterministic=True, pad_batches=True)[source]
evaluate(dataset, metrics, transformers=[], per_task_metrics=False)

Evaluates the performance of this model on specified dataset.

Parameters:
- dataset (dc.data.Dataset) – Dataset object.
- metrics (deepchem.metrics.Metric) – Evaluation metric.
- transformers (list) – List of deepchem.transformers.Transformer.
- per_task_metrics (bool) – If True, also return per-task scores.

Returns: dict – Maps tasks to scores under metric.
evaluate_generator(feed_dict_generator, metrics, transformers=[], labels=None, outputs=None, weights=[], per_task_metrics=False)
fit(dataset, nb_epoch=10, max_checkpoints_to_keep=5, checkpoint_interval=1000, deterministic=False, restore=False, submodel=None)

Train this model on a dataset.

Parameters:
- dataset (Dataset) – the Dataset to train on.
- nb_epoch (int) – the number of epochs to train for.
- max_checkpoints_to_keep (int) – the maximum number of checkpoints to keep. Older checkpoints are discarded.
- checkpoint_interval (int) – the frequency at which to write checkpoints, measured in training steps. Set this to 0 to disable automatic checkpointing.
- deterministic (bool) – if True, the samples are processed in order. If False, a different random order is used for each epoch.
- restore (bool) – if True, restore the model from the most recent checkpoint and continue training from there. If False, retrain the model from scratch.
- submodel (Submodel) – an alternate training objective to use. This should have been created by calling create_submodel().
fit_generator(feed_dict_generator, max_checkpoints_to_keep=5, checkpoint_interval=1000, restore=False, submodel=None)

Train this model on data from a generator.

Parameters:
- feed_dict_generator (generator) – this should generate batches, each represented as a dict that maps Layers to values.
- max_checkpoints_to_keep (int) – the maximum number of checkpoints to keep. Older checkpoints are discarded.
- checkpoint_interval (int) – the frequency at which to write checkpoints, measured in training steps. Set this to 0 to disable automatic checkpointing.
- restore (bool) – if True, restore the model from the most recent checkpoint and continue training from there. If False, retrain the model from scratch.
- submodel (Submodel) – an alternate training objective to use. This should have been created by calling create_submodel().

Returns: the average loss over the most recent checkpoint interval.
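A hedged sketch of what such a generator might look like, with plain strings standing in for the model's Layer objects. batch_generator and the layer names are illustrative, not deepchem API; in real use the dict keys would be the model's feature, label, and weight Layers:

```python
import numpy as np

def batch_generator(X, y, w, batch_size, feature_layer, label_layer, weight_layer):
    """Yield feed dicts mapping layer handles to minibatch arrays,
    in the shape fit_generator() expects. Illustrative only."""
    n = len(X)
    for start in range(0, n, batch_size):
        sl = slice(start, start + batch_size)
        yield {feature_layer: X[sl], label_layer: y[sl], weight_layer: w[sl]}

X = np.random.rand(10, 3)
y = np.zeros((10, 1))
w = np.ones((10, 1))

# 10 samples with batch_size 4 gives batches of sizes 4, 4, and 2.
batches = list(batch_generator(X, y, w, 4, "features", "labels", "weights"))
```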
fit_on_batch(X, y, w, submodel=None)
get_global_step()
get_layer_variables(layer)

Get the list of trainable variables in a layer of the graph.

get_model_filename(model_dir)

Given model directory, obtain filename for the model itself.

get_num_tasks()
get_params(deep=True)

Get parameters for this estimator.

Parameters:
- deep (boolean, optional) – If True, return the parameters for this estimator and contained subobjects that are estimators.

Returns: params – Parameter names mapped to their values (mapping of string to any).
get_params_filename(model_dir)

Given model directory, obtain filename for the model parameters.

get_pickling_errors(obj, seen=None)
get_pre_q_input(input_layer)
get_task_type()

Currently models can only be classifiers or regressors.

load_from_dir(model_dir)
predict(dataset, transformers=[], outputs=None)[source]

Uses self to make predictions on provided Dataset object.

Parameters:
- dataset (dc.data.Dataset) – Dataset to make predictions on.
- transformers (list) – List of dc.trans.Transformers.
- outputs (object) – If None, outputs = self.outputs[0] is assumed (single output). If a Layer/Tensor, it is evaluated and returned as a single ndarray. If a list of Layers/Tensors, a list of ndarrays is returned.

Returns: y_pred – numpy ndarray or list of numpy ndarrays.
predict_on_batch(X, transformers=[], outputs=None)

Generates predictions for input samples, processing samples in a batch.

Parameters:
- X (ndarray) – the input data, as a Numpy array.
- transformers (List) – List of dc.trans.Transformers.

Returns: A Numpy array of predictions.
predict_on_generator(generator, transformers=[], outputs=None)
Parameters:
- generator (Generator) – Generator that constructs feed dictionaries for TensorGraph.
- transformers (list) – List of dc.trans.Transformers.
- outputs (object) – If None, outputs = self.outputs is assumed. If a Layer/Tensor, it is evaluated and returned as a single ndarray. If a list of Layers/Tensors, a list of ndarrays is returned.

Returns: y_pred – numpy ndarray of shape (n_samples, n_classes*n_tasks).
predict_proba(dataset, transformers=[], outputs=None)[source]
predict_proba_on_batch(X, transformers=[], outputs=None)

Generates predictions for input samples, processing samples in a batch.

Parameters:
- X (ndarray) – the input data, as a Numpy array.
- transformers (List) – List of dc.trans.Transformers.

Returns: A Numpy array of predictions.
predict_proba_on_generator(generator, transformers=[], outputs=None)
Returns: y_pred – numpy ndarray of shape (n_samples, n_classes*n_tasks).
reload()

restore()

Reload the values of all variables from the most recent checkpoint file.

save()
save_checkpoint(max_checkpoints_to_keep=5)

Save a checkpoint to disk.

Usually you do not need to call this method, since fit() saves checkpoints automatically. If you have disabled automatic checkpointing during fitting, this can be called to manually write checkpoints.

Parameters: max_checkpoints_to_keep (int) – the maximum number of checkpoints to keep. Older checkpoints are discarded.
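The retention policy that max_checkpoints_to_keep implies can be sketched with the standard library. prune_checkpoints is illustrative, not a deepchem function, and it assumes the global step is encoded after the final "-" in each checkpoint name:

```python
import os
import tempfile

def prune_checkpoints(paths, max_checkpoints_to_keep):
    """Keep only the newest checkpoints and delete the rest, a sketch
    of what a max_checkpoints_to_keep policy does. Illustrative only."""
    ordered = sorted(paths, key=lambda p: int(p.rsplit("-", 1)[1]), reverse=True)
    for stale in ordered[max_checkpoints_to_keep:]:
        os.remove(stale)
    return ordered[:max_checkpoints_to_keep]

# Create four fake checkpoint files at increasing global steps.
tmp = tempfile.mkdtemp()
paths = []
for step in (100, 200, 300, 400):
    p = os.path.join(tmp, "model.ckpt-%d" % step)
    open(p, "w").close()
    paths.append(p)

# Keeping 2 checkpoints deletes the files for steps 100 and 200.
kept = prune_checkpoints(paths, max_checkpoints_to_keep=2)
```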
set_loss(layer)
set_optimizer(optimizer)

Set the optimizer to use for fitting.

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns: self
topsort()
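topsort() orders the graph's layers so that every layer appears after all of its inputs. A small standard-library sketch of that ordering over a hypothetical layer graph (Kahn's algorithm; the layer names are illustrative, and this is not the deepchem implementation):

```python
from collections import deque

def topsort(in_layers):
    """Topologically order layer names so each layer follows its inputs.
    in_layers maps each layer name to the names of the layers feeding it."""
    indegree = {name: len(deps) for name, deps in in_layers.items()}
    consumers = {name: [] for name in in_layers}
    for name, deps in in_layers.items():
        for d in deps:
            consumers[d].append(name)
    ready = deque(sorted(n for n, d in indegree.items() if d == 0))
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for c in consumers[n]:
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)
    return order

# A toy classifier graph: features and labels feed a dense/softmax/loss chain.
graph = {"features": [], "labels": [], "dense": ["features"],
         "softmax": ["dense"], "loss": ["softmax", "labels"]}
order = topsort(graph)
```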
class deepchem.models.tensorflow_models.fcnet.TensorGraphMultiTaskFitTransformRegressor(n_tasks, n_features, fit_transformers=[], n_evals=1, batch_size=50, **kwargs)[source]

Implements a TensorGraphMultiTaskRegressor that performs on-the-fly transformation during fit/predict.

Example:

>>> n_samples = 10
>>> n_features = 3
>>> n_tasks = 1
>>> ids = np.arange(n_samples)
>>> X = np.random.rand(n_samples, n_features, n_features)
>>> y = np.zeros((n_samples, n_tasks))
>>> w = np.ones((n_samples, n_tasks))
>>> dataset = dc.data.NumpyDataset(X, y, w, ids)
>>> fit_transformers = [dc.trans.CoulombFitTransformer(dataset)]
>>> model = dc.models.TensorGraphMultiTaskFitTransformRegressor(n_tasks,
...     [n_features, n_features], dropouts=[0.], learning_rate=0.003,
...     weight_init_stddevs=[np.sqrt(6)/np.sqrt(1000)],
...     batch_size=n_samples, fit_transformers=fit_transformers, n_evals=1)
n_features after fit_transform: 12

add_output(layer)
build()
create_submodel(layers=None, loss=None, optimizer=None)

Create an alternate objective for training one piece of a TensorGraph.

A TensorGraph consists of a set of layers, and specifies a loss function and optimizer to use for training those layers. Usually this is sufficient, but there are cases where you want to train different parts of a model separately. For example, a GAN consists of a generator and a discriminator. They are trained separately, and they use different loss functions.

A submodel defines an alternate objective to use in cases like this. It may optionally specify any of the following: a subset of layers in the model to train; a different loss function; and a different optimizer to use. This method creates a submodel, which you can then pass to fit() to use it for training.

Parameters:
- layers (list) – the list of layers to train. If None, all layers in the model will be trained.
- loss (Layer) – the loss function to optimize. If None, the model’s main loss function will be used.
- optimizer (Optimizer) – the optimizer to use for training. If None, the model’s main optimizer will be used.

Returns: the newly created submodel, which can be passed to any of the fitting methods.
default_generator(dataset, epochs=1, predict=False, deterministic=True, pad_batches=True)[source]
evaluate(dataset, metrics, transformers=[], per_task_metrics=False)

Evaluates the performance of this model on specified dataset.

Parameters:
- dataset (dc.data.Dataset) – Dataset object.
- metrics (deepchem.metrics.Metric) – Evaluation metric.
- transformers (list) – List of deepchem.transformers.Transformer.
- per_task_metrics (bool) – If True, also return per-task scores.

Returns: dict – Maps tasks to scores under metric.
evaluate_generator(feed_dict_generator, metrics, transformers=[], labels=None, outputs=None, weights=[], per_task_metrics=False)
fit(dataset, nb_epoch=10, max_checkpoints_to_keep=5, checkpoint_interval=1000, deterministic=False, restore=False, submodel=None)

Train this model on a dataset.

Parameters:
- dataset (Dataset) – the Dataset to train on.
- nb_epoch (int) – the number of epochs to train for.
- max_checkpoints_to_keep (int) – the maximum number of checkpoints to keep. Older checkpoints are discarded.
- checkpoint_interval (int) – the frequency at which to write checkpoints, measured in training steps. Set this to 0 to disable automatic checkpointing.
- deterministic (bool) – if True, the samples are processed in order. If False, a different random order is used for each epoch.
- restore (bool) – if True, restore the model from the most recent checkpoint and continue training from there. If False, retrain the model from scratch.
- submodel (Submodel) – an alternate training objective to use. This should have been created by calling create_submodel().
fit_generator(feed_dict_generator, max_checkpoints_to_keep=5, checkpoint_interval=1000, restore=False, submodel=None)

Train this model on data from a generator.

Parameters:
- feed_dict_generator (generator) – this should generate batches, each represented as a dict that maps Layers to values.
- max_checkpoints_to_keep (int) – the maximum number of checkpoints to keep. Older checkpoints are discarded.
- checkpoint_interval (int) – the frequency at which to write checkpoints, measured in training steps. Set this to 0 to disable automatic checkpointing.
- restore (bool) – if True, restore the model from the most recent checkpoint and continue training from there. If False, retrain the model from scratch.
- submodel (Submodel) – an alternate training objective to use. This should have been created by calling create_submodel().

Returns: the average loss over the most recent checkpoint interval.
fit_on_batch(X, y, w, submodel=None)
get_global_step()
get_layer_variables(layer)

Get the list of trainable variables in a layer of the graph.

get_model_filename(model_dir)

Given model directory, obtain filename for the model itself.

get_num_tasks()
get_params(deep=True)

Get parameters for this estimator.

Parameters:
- deep (boolean, optional) – If True, return the parameters for this estimator and contained subobjects that are estimators.

Returns: params – Parameter names mapped to their values (mapping of string to any).
get_params_filename(model_dir)

Given model directory, obtain filename for the model parameters.

get_pickling_errors(obj, seen=None)
get_pre_q_input(input_layer)
get_task_type()

Currently models can only be classifiers or regressors.

load_from_dir(model_dir)
predict(dataset, transformers=[], outputs=None)

Uses self to make predictions on provided Dataset object.

Parameters:
- dataset (dc.data.Dataset) – Dataset to make predictions on.
- transformers (list) – List of dc.trans.Transformers.
- outputs (object) – If None, outputs = self.outputs[0] is assumed (single output). If a Layer/Tensor, it is evaluated and returned as a single ndarray. If a list of Layers/Tensors, a list of ndarrays is returned.

Returns: results – numpy ndarray or list of numpy ndarrays.
predict_on_batch(X, transformers=[], outputs=None)

Generates predictions for input samples, processing samples in a batch.

Parameters:
- X (ndarray) – the input data, as a Numpy array.
- transformers (List) – List of dc.trans.Transformers.

Returns: A Numpy array of predictions.
predict_on_generator(generator, transformers=[], outputs=None)[source]
predict_proba(dataset, transformers=[], outputs=None)
Parameters:
- dataset (dc.data.Dataset) – Dataset to make predictions on.
- transformers (list) – List of dc.trans.Transformers.
- outputs (object) – If None, outputs = self.outputs[0] is assumed (single output). If a Layer/Tensor, it is evaluated and returned as a single ndarray. If a list of Layers/Tensors, a list of ndarrays is returned.

Returns: y_pred – numpy ndarray or list of numpy ndarrays.
predict_proba_on_batch(X, transformers=[], outputs=None)

Generates predictions for input samples, processing samples in a batch.

Parameters:
- X (ndarray) – the input data, as a Numpy array.
- transformers (List) – List of dc.trans.Transformers.

Returns: A Numpy array of predictions.
predict_proba_on_generator(generator, transformers=[], outputs=None)
Returns: y_pred – numpy ndarray of shape (n_samples, n_classes*n_tasks).
reload()

restore()

Reload the values of all variables from the most recent checkpoint file.

save()
save_checkpoint(max_checkpoints_to_keep=5)

Save a checkpoint to disk.

Usually you do not need to call this method, since fit() saves checkpoints automatically. If you have disabled automatic checkpointing during fitting, this can be called to manually write checkpoints.

Parameters: max_checkpoints_to_keep (int) – the maximum number of checkpoints to keep. Older checkpoints are discarded.
set_loss(layer)
set_optimizer(optimizer)

Set the optimizer to use for fitting.

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns: self
topsort()
class deepchem.models.tensorflow_models.fcnet.TensorGraphMultiTaskRegressor(n_tasks, n_features, layer_sizes=[1000], weight_init_stddevs=0.02, bias_init_consts=1.0, weight_decay_penalty=0.0, weight_decay_penalty_type='l2', dropouts=0.5, activation_fns=<function relu>, **kwargs)[source]
add_output(layer)
build()
create_submodel(layers=None, loss=None, optimizer=None)

Create an alternate objective for training one piece of a TensorGraph.

A TensorGraph consists of a set of layers, and specifies a loss function and optimizer to use for training those layers. Usually this is sufficient, but there are cases where you want to train different parts of a model separately. For example, a GAN consists of a generator and a discriminator. They are trained separately, and they use different loss functions.

A submodel defines an alternate objective to use in cases like this. It may optionally specify any of the following: a subset of layers in the model to train; a different loss function; and a different optimizer to use. This method creates a submodel, which you can then pass to fit() to use it for training.

Parameters:
- layers (list) – the list of layers to train. If None, all layers in the model will be trained.
- loss (Layer) – the loss function to optimize. If None, the model’s main loss function will be used.
- optimizer (Optimizer) – the optimizer to use for training. If None, the model’s main optimizer will be used.

Returns: the newly created submodel, which can be passed to any of the fitting methods.
default_generator(dataset, epochs=1, predict=False, deterministic=True, pad_batches=True)[source]
evaluate(dataset, metrics, transformers=[], per_task_metrics=False)

Evaluates the performance of this model on specified dataset.

Parameters:
- dataset (dc.data.Dataset) – Dataset object.
- metrics (deepchem.metrics.Metric) – Evaluation metric.
- transformers (list) – List of deepchem.transformers.Transformer.
- per_task_metrics (bool) – If True, also return per-task scores.

Returns: dict – Maps tasks to scores under metric.
evaluate_generator(feed_dict_generator, metrics, transformers=[], labels=None, outputs=None, weights=[], per_task_metrics=False)
fit(dataset, nb_epoch=10, max_checkpoints_to_keep=5, checkpoint_interval=1000, deterministic=False, restore=False, submodel=None)

Train this model on a dataset.

Parameters:
- dataset (Dataset) – the Dataset to train on.
- nb_epoch (int) – the number of epochs to train for.
- max_checkpoints_to_keep (int) – the maximum number of checkpoints to keep. Older checkpoints are discarded.
- checkpoint_interval (int) – the frequency at which to write checkpoints, measured in training steps. Set this to 0 to disable automatic checkpointing.
- deterministic (bool) – if True, the samples are processed in order. If False, a different random order is used for each epoch.
- restore (bool) – if True, restore the model from the most recent checkpoint and continue training from there. If False, retrain the model from scratch.
- submodel (Submodel) – an alternate training objective to use. This should have been created by calling create_submodel().
fit_generator(feed_dict_generator, max_checkpoints_to_keep=5, checkpoint_interval=1000, restore=False, submodel=None)

Train this model on data from a generator.

Parameters:
- feed_dict_generator (generator) – this should generate batches, each represented as a dict that maps Layers to values.
- max_checkpoints_to_keep (int) – the maximum number of checkpoints to keep. Older checkpoints are discarded.
- checkpoint_interval (int) – the frequency at which to write checkpoints, measured in training steps. Set this to 0 to disable automatic checkpointing.
- restore (bool) – if True, restore the model from the most recent checkpoint and continue training from there. If False, retrain the model from scratch.
- submodel (Submodel) – an alternate training objective to use. This should have been created by calling create_submodel().

Returns: the average loss over the most recent checkpoint interval.
fit_on_batch(X, y, w, submodel=None)
get_global_step()
get_layer_variables(layer)

Get the list of trainable variables in a layer of the graph.

get_model_filename(model_dir)

Given model directory, obtain filename for the model itself.

get_num_tasks()
get_params(deep=True)

Get parameters for this estimator.

Parameters:
- deep (boolean, optional) – If True, return the parameters for this estimator and contained subobjects that are estimators.

Returns: params – Parameter names mapped to their values (mapping of string to any).
get_params_filename(model_dir)

Given model directory, obtain filename for the model parameters.

get_pickling_errors(obj, seen=None)
get_pre_q_input(input_layer)
get_task_type()

Currently models can only be classifiers or regressors.

load_from_dir(model_dir)
predict(dataset, transformers=[], outputs=None)

Uses self to make predictions on provided Dataset object.

Parameters:
- dataset (dc.data.Dataset) – Dataset to make predictions on.
- transformers (list) – List of dc.trans.Transformers.
- outputs (object) – If None, outputs = self.outputs[0] is assumed (single output). If a Layer/Tensor, it is evaluated and returned as a single ndarray. If a list of Layers/Tensors, a list of ndarrays is returned.

Returns: results – numpy ndarray or list of numpy ndarrays.
predict_on_batch(X, transformers=[], outputs=None)

Generates predictions for input samples, processing samples in a batch.

Parameters:
- X (ndarray) – the input data, as a Numpy array.
- transformers (List) – List of dc.trans.Transformers.

Returns: A Numpy array of predictions.
predict_on_generator(generator, transformers=[], outputs=None)
Parameters:
- generator (Generator) – Generator that constructs feed dictionaries for TensorGraph.
- transformers (list) – List of dc.trans.Transformers.
- outputs (object) – If None, outputs = self.outputs is assumed. If a Layer/Tensor, it is evaluated and returned as a single ndarray. If a list of Layers/Tensors, a list of ndarrays is returned.

Returns: y_pred – numpy ndarray of shape (n_samples, n_classes*n_tasks).
predict_proba(dataset, transformers=[], outputs=None)
Parameters:
- dataset (dc.data.Dataset) – Dataset to make predictions on.
- transformers (list) – List of dc.trans.Transformers.
- outputs (object) – If None, outputs = self.outputs[0] is assumed (single output). If a Layer/Tensor, it is evaluated and returned as a single ndarray. If a list of Layers/Tensors, a list of ndarrays is returned.

Returns: y_pred – numpy ndarray or list of numpy ndarrays.
predict_proba_on_batch(X, transformers=[], outputs=None)

Generates predictions for input samples, processing samples in a batch.

Parameters:
- X (ndarray) – the input data, as a Numpy array.
- transformers (List) – List of dc.trans.Transformers.

Returns: A Numpy array of predictions.
predict_proba_on_generator(generator, transformers=[], outputs=None)
Returns: y_pred – numpy ndarray of shape (n_samples, n_classes*n_tasks).
reload()

restore()

Reload the values of all variables from the most recent checkpoint file.

save()
save_checkpoint(max_checkpoints_to_keep=5)

Save a checkpoint to disk.

Usually you do not need to call this method, since fit() saves checkpoints automatically. If you have disabled automatic checkpointing during fitting, this can be called to manually write checkpoints.

Parameters: max_checkpoints_to_keep (int) – the maximum number of checkpoints to keep. Older checkpoints are discarded.
set_loss(layer)
set_optimizer(optimizer)

Set the optimizer to use for fitting.

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns: self
topsort()
class deepchem.models.tensorflow_models.fcnet.TensorflowMultiTaskClassifier(n_tasks, n_features, logdir=None, layer_sizes=[1000], weight_init_stddevs=[0.02], bias_init_consts=[1.0], penalty=0.0, penalty_type='l2', dropouts=[0.5], learning_rate=0.001, momentum=0.9, optimizer='adam', batch_size=50, n_classes=2, pad_batches=False, verbose=True, seed=None, **kwargs)[source]

Implements a multitask, fully connected classifier (historically configured via a model_config.proto).

add_example_weight_placeholders(graph, name_scopes)

This method creates the following Placeholders for each task:
weights_%d: Weight tensor with shape batch_size.

Placeholders are wrapped in identity ops to avoid the error caused by feeding and fetching the same tensor.

add_label_placeholders(graph, name_scopes)

This method creates the following Placeholders for each task:
labels_%d: Label tensor with shape batch_size x n_classes.

Placeholders are wrapped in identity ops to avoid the error caused by feeding and fetching the same tensor.

add_output_ops(graph, output)

Replace logits with softmax outputs.

add_training_cost(graph, name_scopes, output, labels, weights)
build(graph, name_scopes, training)[source]

Constructs the graph architecture as specified in its config.

This method creates the following Placeholders:
mol_features: Molecule descriptor (e.g. fingerprint) tensor with shape batch_size x n_features.
construct_feed_dict(X_b, y_b=None, w_b=None, ids_b=None)[source]

Construct a feed dictionary from minibatch data.

TODO(rbharath): ids_b is not used here. Can we remove it?

Parameters:
- X_b – np.ndarray of shape (batch_size, n_features).
- y_b – np.ndarray of shape (batch_size, n_tasks).
- w_b – np.ndarray of shape (batch_size, n_tasks).
- ids_b – List of length batch_size with datapoint identifiers.
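The shapes involved can be sketched with numpy. Here plain placeholder-name strings stand in for the TensorFlow placeholders the real method keys on, and the label layout follows the batch_size x n_classes one-hot form described for add_label_placeholders; all sizes are illustrative:

```python
import numpy as np

batch_size, n_features, n_tasks, n_classes = 50, 128, 3, 2

# Minibatch arrays with the shapes construct_feed_dict expects.
X_b = np.random.rand(batch_size, n_features)
y_b = np.random.randint(n_classes, size=(batch_size, n_tasks))
w_b = np.ones((batch_size, n_tasks))

feed_dict = {"mol_features": X_b}
for task in range(n_tasks):
    # labels_%d: one-hot labels, batch_size x n_classes.
    feed_dict["labels_%d" % task] = np.eye(n_classes)[y_b[:, task]]
    # weights_%d: per-example weights, shape batch_size.
    feed_dict["weights_%d" % task] = w_b[:, task]
```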
construct_graph(training, seed)

Returns a TensorflowGraph object.

cost(logits, labels, weights)

Calculate single-task training cost for a batch of examples.

Parameters:
- logits – Tensor with shape batch_size x n_classes containing logits.
- labels – Tensor with shape batch_size x n_classes containing true labels in a one-hot encoding.
- weights – Tensor with shape batch_size containing example weights.

Returns: A tensor with shape batch_size containing the weighted cost for each example.
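The per-example weighted cost described above can be sketched in numpy. weighted_softmax_cross_entropy is an illustrative reimplementation, not the deepchem code, which builds the same quantity with TensorFlow ops:

```python
import numpy as np

def weighted_softmax_cross_entropy(logits, labels, weights):
    """Per-example weighted cross-entropy, mirroring the cost() contract:
    logits and one-hot labels are batch_size x n_classes, weights is
    batch_size; returns a batch_size vector. Illustrative only."""
    # Log-softmax, stabilized by subtracting the row-wise max.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    per_example = -(labels * log_probs).sum(axis=1)
    return weights * per_example

logits = np.array([[2.0, 0.0], [0.0, 2.0]])
labels = np.array([[1.0, 0.0], [0.0, 1.0]])
weights = np.array([1.0, 0.0])  # second example is masked out
cost = weighted_softmax_cross_entropy(logits, labels, weights)
```

A weight of zero drops an example from the batch cost, which is how missing labels are handled in multitask training.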
evaluate(dataset, metrics, transformers=[], per_task_metrics=False)

Evaluates the performance of this model on specified dataset.

Parameters:
- dataset (dc.data.Dataset) – Dataset object.
- metrics (deepchem.metrics.Metric) – Evaluation metric.
- transformers (list) – List of deepchem.transformers.Transformer.
- per_task_metrics (bool) – If True, also return per-task scores.

Returns: dict – Maps tasks to scores under metric.
fit(dataset, nb_epoch=10, max_checkpoints_to_keep=5, log_every_N_batches=50, checkpoint_interval=10, **kwargs)

Fit the model.

Parameters:
- dataset (dc.data.Dataset) – Dataset object holding training data.
- nb_epoch (int) – Number of training epochs.
- max_checkpoints_to_keep (int) – Maximum number of checkpoints to keep; older checkpoints will be deleted.
- log_every_N_batches (int) – Report progress every N batches. Useful for training on very large datasets, where epochs can take a long time to finish.
- checkpoint_interval (int) – Frequency at which to write checkpoints, measured in epochs.

Raises: AssertionError – If model is not in training mode.
fit_on_batch(X, y, w)

Updates existing model with new information.

get_model_filename(model_dir)

Given model directory, obtain filename for the model itself.

get_num_tasks()
get_params(deep=True)

Get parameters for this estimator.

Parameters:
- deep (boolean, optional) – If True, return the parameters for this estimator and contained subobjects that are estimators.

Returns: params – Parameter names mapped to their values (mapping of string to any).
get_params_filename(model_dir)

Given model directory, obtain filename for the model parameters.

get_task_type()
get_training_op(graph, loss)

Get training op for applying gradients to variables.

Subclasses that need to do anything fancy with gradients should override this method.

Returns: A training op.

predict(dataset, transformers=[])

Uses self to make predictions on provided Dataset object.

Returns: y_pred – numpy ndarray of shape (n_samples,).
predict_on_batch(X)

Return model output for the provided input.

restore() must have previously been called on this object.

Parameters:
- X – numpy array of features for the minibatch.

Returns: Tuple of three numpy arrays with shape n_examples x n_tasks: output (model outputs), labels (true labels), and weights (example weights). Note that the output and labels arrays may be more than 2D, e.g. for classifier models that return class probabilities.

Raises:
- AssertionError – If model is not in evaluation mode.
- ValueError – If output and labels are not both 3D or both 2D.
predict_proba(dataset, transformers=[], n_classes=2)

TODO: Do transformers even make sense here?

Returns: y_pred – numpy ndarray of shape (n_samples, n_classes*n_tasks).
predict_proba_on_batch(X)

Return model output for the provided input.

Restore(checkpoint) must have previously been called on this object.

Parameters: dataset – dc.data.Dataset object.

Returns: Tuple of three numpy arrays with shape n_examples x n_tasks:

- output – Model outputs.

Note that the output arrays may be more than 2D, e.g. for classifier models that return class probabilities.

Raises:

- AssertionError – If model is not in evaluation mode.
- ValueError – If output and labels are not both 3D or both 2D.
reload()

Loads model from disk. Thin wrapper around restore() for consistency.

restore()

Restores the model from the provided training checkpoint.

Parameters: checkpoint – string. Path to checkpoint file.
save()

No-op since tf models save themselves during fit()

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns: self
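The `<component>__<parameter>` convention above can be illustrated with a minimal pure-Python sketch. The `Estimator` class here is hypothetical (not the DeepChem or scikit-learn implementation); it only shows how a double-underscore key is split and routed to a nested component:

```python
# Hypothetical sketch (not the DeepChem implementation) of how
# "component__parameter" keys are routed to nested estimators.
class Estimator:
    def __init__(self, **params):
        self.__dict__.update(params)

    def set_params(self, **params):
        for key, value in params.items():
            if "__" in key:
                # Split "component__parameter" and delegate to the component.
                component, _, sub_key = key.partition("__")
                getattr(self, component).set_params(**{sub_key: value})
            else:
                setattr(self, key, value)
        return self

inner = Estimator(learning_rate=0.001)
outer = Estimator(inner=inner, penalty=0.0)
# "inner__learning_rate" updates the nested estimator's parameter.
outer.set_params(penalty=0.05, inner__learning_rate=0.01)
```

Returning `self` allows calls to be chained, which is why the real method does the same.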
class deepchem.models.tensorflow_models.fcnet.TensorflowMultiTaskFitTransformRegressor(n_tasks, n_features, logdir=None, layer_sizes=[1000], weight_init_stddevs=[0.02], bias_init_consts=[1.0], penalty=0.0, penalty_type='l2', dropouts=[0.5], learning_rate=0.002, momentum=0.8, optimizer='adam', batch_size=50, fit_transformers=[], n_evals=1, verbose=True, seed=None, **kwargs)[source]

Implements a TensorflowMultiTaskRegressor that performs on-the-fly transformations during fit and predict.

Example:

>>> n_samples = 10
>>> n_features = 3
>>> n_tasks = 1
>>> ids = np.arange(n_samples)
>>> X = np.random.rand(n_samples, n_features, n_features)
>>> y = np.zeros((n_samples, n_tasks))
>>> w = np.ones((n_samples, n_tasks))
>>> dataset = dc.data.NumpyDataset(X, y, w, ids)
>>> fit_transformers = [dc.trans.CoulombFitTransformer(dataset)]
>>> model = dc.models.TensorflowMultiTaskFitTransformRegressor(n_tasks,
...     [n_features, n_features],
...     dropouts=[0.], learning_rate=0.003, weight_init_stddevs=[np.sqrt(6)/np.sqrt(1000)],
...     batch_size=n_samples, fit_transformers=fit_transformers, n_evals=1)
n_features after fit_transform: 12

add_example_weight_placeholders(graph, name_scopes)

This method creates the following Placeholders for each task:
weights_%d: Weight tensor with shape batch_size.

Placeholders are wrapped in identity ops to avoid the error caused by feeding and fetching the same tensor.

add_label_placeholders(graph, name_scopes)

This method creates the following Placeholders for each task:
labels_%d: Label tensor with shape batch_size.

Placeholders are wrapped in identity ops to avoid the error caused by feeding and fetching the same tensor.

add_output_ops(graph, output)

No-op for regression models since no softmax.

add_training_cost(graph, name_scopes, output, labels, weights)
build(graph, name_scopes, training)

Constructs the graph architecture as specified in its config.

This method creates the following Placeholders:
mol_features: Molecule descriptor (e.g. fingerprint) tensor with shape
batch_size x n_features.
construct_feed_dict(X_b, y_b=None, w_b=None, ids_b=None)

Construct a feed dictionary from minibatch data.

TODO(rbharath): ids_b is not used here. Can we remove it?

Parameters:

- X_b – np.ndarray of shape (batch_size, n_features)
- y_b – np.ndarray of shape (batch_size, n_tasks)
- w_b – np.ndarray of shape (batch_size, n_tasks)
- ids_b – List of length (batch_size) with datapoint identifiers.
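The shape of the feed dictionary can be sketched in plain Python. This is a conceptual illustration only: the real method maps actual TensorFlow placeholder tensors rather than the string keys used here, and the key names mirror the `mol_features`/`labels_%d`/`weights_%d` placeholders described elsewhere on this page:

```python
# Conceptual sketch of what construct_feed_dict pairs together:
# one feature entry, plus per-task label and weight columns.
def construct_feed_dict(X_b, y_b=None, w_b=None):
    feed = {"mol_features": X_b}
    if y_b is not None:
        # One labels_%d entry per task, holding that task's column.
        for task, labels in enumerate(zip(*y_b)):
            feed["labels_%d" % task] = list(labels)
    if w_b is not None:
        for task, weights in enumerate(zip(*w_b)):
            feed["weights_%d" % task] = list(weights)
    return feed

# One example, two tasks.
fd = construct_feed_dict([[0.1, 0.2]], y_b=[[1.0, 0.0]], w_b=[[1.0, 1.0]])
```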
construct_graph(training, seed)

Returns a TensorflowGraph object.

cost(output, labels, weights)

Calculate single-task training cost for a batch of examples.

Parameters:

- output – Tensor with shape batch_size containing predicted values.
- labels – Tensor with shape batch_size containing true values.
- weights – Tensor with shape batch_size containing example weights.

Returns: A tensor with shape batch_size containing the weighted cost for each example.
evaluate(dataset, metrics, transformers=[], per_task_metrics=False)

Evaluates the performance of this model on specified dataset.

Parameters:

- dataset (dc.data.Dataset) – Dataset object.
- metrics (deepchem.metrics.Metric) – Evaluation metric.
- transformers (list) – List of deepchem.transformers.Transformer.
- per_task_metrics (bool) – If True, return per-task scores.

Returns: Maps tasks to scores under metric.
Return type: dict
fit(dataset, nb_epoch=10, max_checkpoints_to_keep=5, log_every_N_batches=50, checkpoint_interval=10, **kwargs)[source]

Perform fit transformations on each minibatch. Fit the model.

Parameters:

- dataset (dc.data.Dataset) – Dataset object holding training data.
- nb_epoch (int) – Number of training epochs.
- max_checkpoints_to_keep (int) – Maximum number of checkpoints to keep; older checkpoints will be deleted.
- log_every_N_batches (int) – Report every N batches. Useful for training on very large datasets, where epochs can take a long time to finish.
- checkpoint_interval (int) – Frequency at which to write checkpoints, measured in epochs.

Raises: AssertionError – If model is not in training mode.
fit_on_batch(X, y, w)

Updates existing model with new information.

get_model_filename(model_dir)

Given model directory, obtain filename for the model itself.

get_num_tasks()
get_params(deep=True)

Get parameters for this estimator.

Parameters: deep (boolean, optional) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns: params – Parameter names mapped to their values.
Return type: mapping of string to any
get_params_filename(model_dir)

Given model directory, obtain filename for the model parameters.

get_task_type()
get_training_op(graph, loss)

Get training op for applying gradients to variables.

Subclasses that need to do anything fancy with gradients should override this method.

Returns: A training op.

predict(dataset, transformers=[])

Uses self to make predictions on provided Dataset object.

Returns: y_pred – numpy ndarray of shape (n_samples,)
predict_on_batch(X)[source]
Return model output for the provided input. Each example is evaluated
self.n_evals times.

Restore(checkpoint) must have previously been called on this object.

Parameters: dataset – dc.data.Dataset object.

Returns: Tuple of three numpy arrays with shape n_examples x n_tasks:

- output – Model outputs.
- labels – True labels.
- weights – Example weights.

Note that the output and labels arrays may be more than 2D, e.g. for classifier models that return class probabilities.

Raises:

- AssertionError – If model is not in evaluation mode.
- ValueError – If output and labels are not both 3D or both 2D.
predict_proba(dataset, transformers=[], n_classes=2)

TODO: Do transformers even make sense here?

Returns: y_pred – numpy ndarray of shape (n_samples, n_classes*n_tasks)
predict_proba_on_batch(X)

Makes predictions of class probabilities on given batch of new data.

Parameters: X (np.ndarray) – Features
reload()

Loads model from disk. Thin wrapper around restore() for consistency.

restore()

Restores the model from the provided training checkpoint.

Parameters: checkpoint – string. Path to checkpoint file.
save()

No-op since tf models save themselves during fit()

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns: self
class deepchem.models.tensorflow_models.fcnet.TensorflowMultiTaskRegressor(n_tasks, n_features, logdir=None, layer_sizes=[1000], weight_init_stddevs=[0.02], bias_init_consts=[1.0], penalty=0.0, penalty_type='l2', dropouts=[0.5], learning_rate=0.001, momentum=0.9, optimizer='adam', batch_size=50, n_classes=2, pad_batches=False, verbose=True, seed=None, **kwargs)[source]

Implements a multitask fully connected regression network.

add_example_weight_placeholders(graph, name_scopes)

This method creates the following Placeholders for each task:
weights_%d: Weight tensor with shape batch_size.

Placeholders are wrapped in identity ops to avoid the error caused by feeding and fetching the same tensor.

add_label_placeholders(graph, name_scopes)

This method creates the following Placeholders for each task:
labels_%d: Label tensor with shape batch_size.

Placeholders are wrapped in identity ops to avoid the error caused by feeding and fetching the same tensor.

add_output_ops(graph, output)

No-op for regression models since no softmax.

add_training_cost(graph, name_scopes, output, labels, weights)
build(graph, name_scopes, training)[source]

Constructs the graph architecture as specified in its config.

This method creates the following Placeholders:
mol_features: Molecule descriptor (e.g. fingerprint) tensor with shape
batch_size x n_features.
construct_feed_dict(X_b, y_b=None, w_b=None, ids_b=None)[source]

Construct a feed dictionary from minibatch data.

TODO(rbharath): ids_b is not used here. Can we remove it?

Parameters:

- X_b – np.ndarray of shape (batch_size, n_features)
- y_b – np.ndarray of shape (batch_size, n_tasks)
- w_b – np.ndarray of shape (batch_size, n_tasks)
- ids_b – List of length (batch_size) with datapoint identifiers.
construct_graph(training, seed)

Returns a TensorflowGraph object.

cost(output, labels, weights)

Calculate single-task training cost for a batch of examples.

Parameters:

- output – Tensor with shape batch_size containing predicted values.
- labels – Tensor with shape batch_size containing true values.
- weights – Tensor with shape batch_size containing example weights.

Returns: A tensor with shape batch_size containing the weighted cost for each example.
evaluate(dataset, metrics, transformers=[], per_task_metrics=False)

Evaluates the performance of this model on specified dataset.

Parameters:

- dataset (dc.data.Dataset) – Dataset object.
- metrics (deepchem.metrics.Metric) – Evaluation metric.
- transformers (list) – List of deepchem.transformers.Transformer.
- per_task_metrics (bool) – If True, return per-task scores.

Returns: Maps tasks to scores under metric.
Return type: dict
fit(dataset, nb_epoch=10, max_checkpoints_to_keep=5, log_every_N_batches=50, checkpoint_interval=10, **kwargs)

Fit the model.

Parameters:

- dataset (dc.data.Dataset) – Dataset object holding training data.
- nb_epoch (int) – Number of training epochs.
- max_checkpoints_to_keep (int) – Maximum number of checkpoints to keep; older checkpoints will be deleted.
- log_every_N_batches (int) – Report every N batches. Useful for training on very large datasets, where epochs can take a long time to finish.
- checkpoint_interval (int) – Frequency at which to write checkpoints, measured in epochs.

Raises: AssertionError – If model is not in training mode.
fit_on_batch(X, y, w)

Updates existing model with new information.

get_model_filename(model_dir)

Given model directory, obtain filename for the model itself.

get_num_tasks()
get_params(deep=True)

Get parameters for this estimator.

Parameters: deep (boolean, optional) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns: params – Parameter names mapped to their values.
Return type: mapping of string to any
get_params_filename(model_dir)

Given model directory, obtain filename for the model parameters.

get_task_type()
get_training_op(graph, loss)

Get training op for applying gradients to variables.

Subclasses that need to do anything fancy with gradients should override this method.

Returns: A training op.

predict(dataset, transformers=[])

Uses self to make predictions on provided Dataset object.

Returns: y_pred – numpy ndarray of shape (n_samples,)
predict_on_batch(X)

Return model output for the provided input.

Restore(checkpoint) must have previously been called on this object.

Parameters: dataset – dc.data.Dataset object.

Returns: Tuple of three numpy arrays with shape n_examples x n_tasks:

- output – Model outputs.
- labels – True labels.
- weights – Example weights.

Note that the output and labels arrays may be more than 2D, e.g. for classifier models that return class probabilities.

Raises:

- AssertionError – If model is not in evaluation mode.
- ValueError – If output and labels are not both 3D or both 2D.
predict_proba(dataset, transformers=[], n_classes=2)

TODO: Do transformers even make sense here?

Returns: y_pred – numpy ndarray of shape (n_samples, n_classes*n_tasks)
predict_proba_on_batch(X)

Makes predictions of class probabilities on given batch of new data.

Parameters: X (np.ndarray) – Features
reload()

Loads model from disk. Thin wrapper around restore() for consistency.

restore()

Restores the model from the provided training checkpoint.

Parameters: checkpoint – string. Path to checkpoint file.
save()

No-op since tf models save themselves during fit()

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns: self

## deepchem.models.tensorflow_models.lr module¶

Created on Tue Nov 08 14:10:02 2016

@author: Zhenqin Wu

class deepchem.models.tensorflow_models.lr.TensorflowLogisticRegression(n_tasks, n_features, logdir=None, layer_sizes=[1000], weight_init_stddevs=[0.02], bias_init_consts=[1.0], penalty=0.0, penalty_type='l2', dropouts=[0.5], learning_rate=0.001, momentum=0.9, optimizer='adam', batch_size=50, n_classes=2, pad_batches=False, verbose=True, seed=None, **kwargs)[source]

A simple TensorFlow-based logistic regression model.
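The model's core computation, n_tasks independent sigmoid outputs over shared input features (see build below), can be sketched in plain Python. This is a hypothetical illustration, not the TensorFlow graph the class actually constructs:

```python
import math

# Toy sketch of per-task logistic regression: one sigmoid node per task,
# each with its own weight vector and bias over the same features.
def logistic_forward(x, task_weights, task_biases):
    """Return one sigmoid probability per task for a single example."""
    probs = []
    for w, b in zip(task_weights, task_biases):
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        probs.append(1.0 / (1.0 + math.exp(-z)))
    return probs

# One task whose logit is exactly zero, so its probability is 0.5.
p = logistic_forward([1.0, 2.0], task_weights=[[0.5, -0.25]], task_biases=[0.0])
```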

add_example_weight_placeholders(graph, name_scopes)

This method creates the following Placeholders for each task:
weights_%d: Weight tensor with shape batch_size.

Placeholders are wrapped in identity ops to avoid the error caused by feeding and fetching the same tensor.

add_label_placeholders(graph, name_scopes)[source]
add_output_ops(graph, output)[source]
add_training_cost(graph, name_scopes, output, labels, weights)[source]
build(graph, name_scopes, training)[source]

Constructs the graph architecture of model: n_tasks * sigmoid nodes.

This method creates the following Placeholders:
mol_features: Molecule descriptor (e.g. fingerprint) tensor with shape
batch_size x n_features.
construct_feed_dict(X_b, y_b=None, w_b=None, ids_b=None)[source]
construct_graph(training, seed)

Returns a TensorflowGraph object.

cost(logits, labels, weights)[source]
evaluate(dataset, metrics, transformers=[], per_task_metrics=False)

Evaluates the performance of this model on specified dataset.

Parameters:

- dataset (dc.data.Dataset) – Dataset object.
- metrics (deepchem.metrics.Metric) – Evaluation metric.
- transformers (list) – List of deepchem.transformers.Transformer.
- per_task_metrics (bool) – If True, return per-task scores.

Returns: Maps tasks to scores under metric.
Return type: dict
fit(dataset, nb_epoch=10, max_checkpoints_to_keep=5, log_every_N_batches=50, checkpoint_interval=10, **kwargs)

Fit the model.

Parameters:

- dataset (dc.data.Dataset) – Dataset object holding training data.
- nb_epoch (int) – Number of training epochs.
- max_checkpoints_to_keep (int) – Maximum number of checkpoints to keep; older checkpoints will be deleted.
- log_every_N_batches (int) – Report every N batches. Useful for training on very large datasets, where epochs can take a long time to finish.
- checkpoint_interval (int) – Frequency at which to write checkpoints, measured in epochs.

Raises: AssertionError – If model is not in training mode.
fit_on_batch(X, y, w)

Updates existing model with new information.

get_model_filename(model_dir)

Given model directory, obtain filename for the model itself.

get_num_tasks()
get_params(deep=True)

Get parameters for this estimator.

Parameters: deep (boolean, optional) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns: params – Parameter names mapped to their values.
Return type: mapping of string to any
get_params_filename(model_dir)

Given model directory, obtain filename for the model parameters.

get_task_type()

Currently models can only be classifiers or regressors.

get_training_op(graph, loss)

Get training op for applying gradients to variables.

Subclasses that need to do anything fancy with gradients should override this method.

Returns: A training op.

predict(dataset, transformers=[])

Uses self to make predictions on provided Dataset object.

Returns: y_pred – numpy ndarray of shape (n_samples,)
predict_on_batch(X)[source]
predict_proba(dataset, transformers=[], n_classes=2)

TODO: Do transformers even make sense here?

Returns: y_pred – numpy ndarray of shape (n_samples, n_classes*n_tasks)
predict_proba_on_batch(X)[source]
reload()

Loads model from disk. Thin wrapper around restore() for consistency.

restore()

Restores the model from the provided training checkpoint.

Parameters: checkpoint – string. Path to checkpoint file.
save()

No-op since tf models save themselves during fit()

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns: self
deepchem.models.tensorflow_models.lr.weight_decay(penalty_type, penalty)[source]

## deepchem.models.tensorflow_models.progressive_joint module¶

class deepchem.models.tensorflow_models.progressive_joint.ProgressiveJointRegressor(n_tasks, n_features, alpha_init_stddevs=[0.02], **kwargs)[source]

Implements a progressive multitask neural network.

Progressive Networks: https://arxiv.org/pdf/1606.04671v3.pdf

Progressive networks allow for multitask learning where each task gets a new column of weights. As a result, there is no catastrophic forgetting, where previously learned tasks are overwritten.

TODO(rbharath): This class is unnecessarily complicated. Can we simplify the structure of the code here?
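The training discipline described above can be shown with a toy sketch (scalar "columns" and a hypothetical API, not the DeepChem graph code): each task owns a column of weights, and only the current task's column is ever updated, so fitting a new task cannot overwrite earlier ones.

```python
# Toy illustration of progressive-network training: one weight
# "column" (here a single scalar) per task; earlier columns are frozen.
class ProgressiveColumns:
    def __init__(self):
        self.columns = []  # one weight per task in this toy version

    def add_task(self, init_weight=0.0):
        self.columns.append(init_weight)
        return len(self.columns) - 1

    def fit_task(self, task, gradient_steps):
        # Gradient updates touch only this task's column; previous
        # columns stay frozen, so earlier tasks are never forgotten.
        for g in gradient_steps:
            self.columns[task] -= g

net = ProgressiveColumns()
t0 = net.add_task()
net.fit_task(t0, [0.1, -0.3])   # trains task 0's column only
t1 = net.add_task()
net.fit_task(t1, [0.05])        # task 0's column is untouched
```

The real model additionally feeds frozen lower-layer activations of earlier columns into later columns through the adapters added by add_adapter.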

add_adapter(all_layers, task, layer_num)[source]

add_example_weight_placeholders(graph, name_scopes)

This method creates the following Placeholders for each task:
weights_%d: Weight tensor with shape batch_size.

Placeholders are wrapped in identity ops to avoid the error caused by feeding and fetching the same tensor.

add_label_placeholders(graph, name_scopes)

This method creates the following Placeholders for each task:
labels_%d: Label tensor with shape batch_size.

Placeholders are wrapped in identity ops to avoid the error caused by feeding and fetching the same tensor.

add_output_ops(graph, output)

No-op for regression models since no softmax.

add_training_cost(graph, name_scopes, output, labels, weights)
add_training_costs(graph, name_scopes, output, labels, weights)[source]
build(graph, name_scopes, training)[source]

Constructs the graph architecture as specified in its config.

This method creates the following Placeholders:
mol_features: Molecule descriptor (e.g. fingerprint) tensor with shape
batch_size x n_features.
construct_feed_dict(X_b, y_b=None, w_b=None, ids_b=None)[source]

Construct a feed dictionary from minibatch data.

TODO(rbharath): ids_b is not used here. Can we remove it?

Parameters:

- X_b – np.ndarray of shape (batch_size, n_features)
- y_b – np.ndarray of shape (batch_size, n_tasks)
- w_b – np.ndarray of shape (batch_size, n_tasks)
- ids_b – List of length (batch_size) with datapoint identifiers.
construct_graph(training, seed)

Returns a TensorflowGraph object.

cost(output, labels, weights)

Calculate single-task training cost for a batch of examples.

Parameters:

- output – Tensor with shape batch_size containing predicted values.
- labels – Tensor with shape batch_size containing true values.
- weights – Tensor with shape batch_size containing example weights.

Returns: A tensor with shape batch_size containing the weighted cost for each example.
evaluate(dataset, metrics, transformers=[], per_task_metrics=False)

Evaluates the performance of this model on specified dataset.

Parameters:

- dataset (dc.data.Dataset) – Dataset object.
- metrics (deepchem.metrics.Metric) – Evaluation metric.
- transformers (list) – List of deepchem.transformers.Transformer.
- per_task_metrics (bool) – If True, return per-task scores.

Returns: Maps tasks to scores under metric.
Return type: dict
fit(dataset, nb_epoch=10, max_checkpoints_to_keep=5, log_every_N_batches=50, checkpoint_interval=10, **kwargs)[source]

Fit the model.

Parameters:

- dataset (dc.data.Dataset) – Dataset object holding training data.
- nb_epoch (int) – Number of training epochs.
- max_checkpoints_to_keep (int) – Maximum number of checkpoints to keep; older checkpoints will be deleted.
- log_every_N_batches (int) – Report every N batches. Useful for training on very large datasets, where epochs can take a long time to finish.
- checkpoint_interval (int) – Frequency at which to write checkpoints, measured in epochs.

Raises: AssertionError – If model is not in training mode.
fit_on_batch(X, y, w)

Updates existing model with new information.

get_model_filename(model_dir)

Given model directory, obtain filename for the model itself.

get_num_tasks()
get_params(deep=True)

Get parameters for this estimator.

Parameters: deep (boolean, optional) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns: params – Parameter names mapped to their values.
Return type: mapping of string to any
get_params_filename(model_dir)

Given model directory, obtain filename for the model parameters.

get_task_type()
get_training_op(graph, loss)[source]

Get training op for applying gradients to variables.

Subclasses that need to do anything fancy with gradients should override this method.

Returns: A training op.

predict(dataset, transformers=[])

Uses self to make predictions on provided Dataset object.

Returns: y_pred – numpy ndarray of shape (n_samples,)
predict_on_batch(X)[source]

Return model output for the provided input.

Restore(checkpoint) must have previously been called on this object.

Parameters: dataset – dc.data.Dataset object.

Returns: Tuple of three numpy arrays with shape n_examples x n_tasks:

- output – Model outputs.
- labels – True labels.
- weights – Example weights.

Note that the output and labels arrays may be more than 2D, e.g. for classifier models that return class probabilities.

Raises:

- AssertionError – If model is not in evaluation mode.
- ValueError – If output and labels are not both 3D or both 2D.
predict_proba(dataset, transformers=[], n_classes=2)

TODO: Do transformers even make sense here?

Returns: y_pred – numpy ndarray of shape (n_samples, n_classes*n_tasks)
predict_proba_on_batch(X)

Makes predictions of class probabilities on given batch of new data.

Parameters: X (np.ndarray) – Features
reload()

Loads model from disk. Thin wrapper around restore() for consistency.

restore()

Restores the model from the provided training checkpoint.

Parameters: checkpoint – string. Path to checkpoint file.
save()

No-op since tf models save themselves during fit()

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns: self

class deepchem.models.tensorflow_models.progressive_multitask.ProgressiveMultitaskRegressor(n_tasks, n_features, alpha_init_stddevs=[0.02], **kwargs)[source]

Implements a progressive multitask neural network.

Progressive Networks: https://arxiv.org/pdf/1606.04671v3.pdf

Progressive networks allow for multitask learning where each task gets a new column of weights. As a result, there is no catastrophic forgetting, where previously learned tasks are overwritten.

TODO(rbharath): This class is unnecessarily complicated. Can we simplify the structure of the code here?

add_adapter(all_layers, task, layer_num)[source]

add_example_weight_placeholders(graph, name_scopes)

This method creates the following Placeholders for each task:
weights_%d: Weight tensor with shape batch_size.

Placeholders are wrapped in identity ops to avoid the error caused by feeding and fetching the same tensor.

add_label_placeholders(graph, name_scopes)

This method creates the following Placeholders for each task:
labels_%d: Label tensor with shape batch_size.

Placeholders are wrapped in identity ops to avoid the error caused by feeding and fetching the same tensor.

add_output_ops(graph, output)

No-op for regression models since no softmax.

add_placeholders(graph, name_scopes)[source]

Adds all placeholders for this model.

add_progressive_lattice(graph, name_scopes, training)[source]

Constructs the graph architecture as specified in its config.

This method creates the following Placeholders:
mol_features: Molecule descriptor (e.g. fingerprint) tensor with shape
batch_size x n_features.
add_task_training_costs(graph, name_scopes, outputs, labels, weights)[source]

TODO(rbharath): Figure out how to support weight decay for this model. Since each task is trained separately, weight decay should only be used on weights in column for that task.

Parameters:

- graph (tf.Graph) – Graph for the model.
- name_scopes (dict) – Contains all the scopes for model.
- outputs (list) – List of output tensors from model.
- weights (list) – List of weight placeholders for model.
add_training_cost(graph, name_scopes, output, labels, weights)
add_training_costs(graph, name_scopes, output, labels, weights)[source]
build(graph, name_scopes, training)

Constructs the graph architecture as specified in its config.

This method creates the following Placeholders:
mol_features: Molecule descriptor (e.g. fingerprint) tensor with shape
batch_size x n_features.
construct_feed_dict(X_b, y_b=None, w_b=None, ids_b=None)[source]

Construct a feed dictionary from minibatch data.

TODO(rbharath): ids_b is not used here. Can we remove it?

Parameters:

- X_b – np.ndarray of shape (batch_size, n_features)
- y_b – np.ndarray of shape (batch_size, n_tasks)
- w_b – np.ndarray of shape (batch_size, n_tasks)
- ids_b – List of length (batch_size) with datapoint identifiers.
construct_graph(training, seed)[source]

Returns a TensorflowGraph object.

construct_task_feed_dict(this_task, X_b, y_b=None, w_b=None, ids_b=None)[source]

Construct a feed dictionary from minibatch data.

TODO(rbharath): ids_b is not used here. Can we remove it?

Parameters:

- X_b – np.ndarray of shape (batch_size, n_features)
- y_b – np.ndarray of shape (batch_size, n_tasks)
- w_b – np.ndarray of shape (batch_size, n_tasks)
- ids_b – List of length (batch_size) with datapoint identifiers.
cost(output, labels, weights)

Calculate single-task training cost for a batch of examples.

Parameters:

- output – Tensor with shape batch_size containing predicted values.
- labels – Tensor with shape batch_size containing true values.
- weights – Tensor with shape batch_size containing example weights.

Returns: A tensor with shape batch_size containing the weighted cost for each example.
evaluate(dataset, metrics, transformers=[], per_task_metrics=False)

Evaluates the performance of this model on specified dataset.

Parameters:

- dataset (dc.data.Dataset) – Dataset object.
- metrics (deepchem.metrics.Metric) – Evaluation metric.
- transformers (list) – List of deepchem.transformers.Transformer.
- per_task_metrics (bool) – If True, return per-task scores.

Returns: Maps tasks to scores under metric.
Return type: dict
fit(dataset, tasks=None, close_session=True, max_checkpoints_to_keep=5, **kwargs)[source]

Fit the model.

Progressive networks are fit by training one task at a time. Iteratively fits one task at a time with other weights frozen.

Parameters: dataset (dc.data.Dataset) – Dataset object holding training data.

Raises: AssertionError – If model is not in training mode.
fit_on_batch(X, y, w)

Updates existing model with new information.

fit_task(sess, dataset, task, task_train_op, nb_epoch=10, log_every_N_batches=50, checkpoint_interval=10)[source]

Fit the model.

TODO(rbharath): Figure out if the logging will work correctly with the global_step set as it is.

Parameters:

- dataset (dc.data.Dataset) – Dataset object holding training data.
- task (int) – The index of the task to train on.
- nb_epoch (int) – Number of training epochs.
- max_checkpoints_to_keep (int) – Maximum number of checkpoints to keep; older checkpoints will be deleted.
- log_every_N_batches (int) – Report every N batches. Useful for training on very large datasets, where epochs can take a long time to finish.
- checkpoint_interval (int) – Frequency at which to write checkpoints, measured in epochs.

Raises: AssertionError – If model is not in training mode.
get_model_filename(model_dir)

Given model directory, obtain filename for the model itself.

get_num_tasks()
get_params(deep=True)

Get parameters for this estimator.

Parameters: deep (boolean, optional) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns: params – Parameter names mapped to their values.
Return type: mapping of string to any
get_params_filename(model_dir)

Given model directory, obtain filename for the model parameters.

get_task_training_op(graph, losses, task)[source]

Get training op for applying gradients to variables.

Subclasses that need to do anything fancy with gradients should override this method.

Parameters:

- graph (tf.Graph) – Graph for this op.
- losses (dict) – Dictionary mapping task to losses.

Returns: A training op.
get_task_type()
get_training_op(graph, loss)[source]

Get training op for applying gradients to variables.

Subclasses that need to do anything fancy with gradients should override this method.

Returns: A training op.

predict(dataset, transformers=[])

Uses self to make predictions on provided Dataset object.

Returns: y_pred – numpy ndarray of shape (n_samples,)
predict_on_batch(X, pad_batch=False)[source]

Return model output for the provided input.

Restore(checkpoint) must have previously been called on this object.

Parameters: dataset – dc.data.Dataset object.

Returns: Tuple of three numpy arrays with shape n_examples x n_tasks:

- output – Model outputs.
- labels – True labels.
- weights – Example weights.

Note that the output and labels arrays may be more than 2D, e.g. for classifier models that return class probabilities.

Raises:

- AssertionError – If model is not in evaluation mode.
- ValueError – If output and labels are not both 3D or both 2D.
predict_proba(dataset, transformers=[], n_classes=2)

TODO: Do transformers even make sense here?

Returns: y_pred – numpy ndarray of shape (n_samples, n_classes*n_tasks)
predict_proba_on_batch(X)

Makes predictions of class probabilities on given batch of new data.

Parameters: X (np.ndarray) – Features
reload()

Loads model from disk. Thin wrapper around restore() for consistency.

restore()

Restores the model from the provided training checkpoint.

Parameters: checkpoint – string. Path to checkpoint file.
save()

No-op since tf models save themselves during fit()

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns: self

class deepchem.models.tensorflow_models.robust_multitask.RobustMultitaskClassifier(n_tasks, n_features, logdir=None, bypass_layer_sizes=[100], bypass_weight_init_stddevs=[0.02], bypass_bias_init_consts=[1.0], bypass_dropouts=[0.5], **kwargs)[source]

Implements a neural network for robust multitasking.

The key idea is to have bypass layers that feed directly from the input features to each task's output. This allows individual tasks to route around a poor shared multitask representation.
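The bypass idea can be sketched in plain Python (toy scalar "layers", hypothetical names; not the TensorFlow graph this class builds): each task's output combines a shared representation with a task-specific bypass computed directly from the features.

```python
# Conceptual sketch: per-task output = shared representation +
# task-specific bypass reading directly from the raw features.
def robust_multitask_output(features, shared_layer, bypass_layers):
    shared = shared_layer(features)  # representation shared by all tasks
    outputs = []
    for bypass in bypass_layers:     # one bypass "layer" per task
        # A task whose bypass dominates can route around bad sharing.
        outputs.append(shared + bypass(features))
    return outputs

outs = robust_multitask_output(
    [1.0, 2.0],
    shared_layer=lambda x: sum(x),                   # toy shared "layer"
    bypass_layers=[lambda x: x[0], lambda x: 0.0],   # toy per-task bypasses
)
```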

add_example_weight_placeholders(graph, name_scopes)

This method creates the following Placeholders for each task:
weights_%d: Weight tensor with shape batch_size.

Placeholders are wrapped in identity ops to avoid the error caused by feeding and fetching the same tensor.

add_label_placeholders(graph, name_scopes)

This method creates the following Placeholders for each task:
labels_%d: Label tensor with shape batch_size x n_classes.

Placeholders are wrapped in identity ops to avoid the error caused by feeding and fetching the same tensor.

add_output_ops(graph, output)

Replace logits with softmax outputs.

add_training_cost(graph, name_scopes, output, labels, weights)
build(graph, name_scopes, training)[source]

Constructs the graph architecture as specified in its config.

This method creates the following Placeholders:
mol_features: Molecule descriptor (e.g. fingerprint) tensor with shape
batch_size x num_features.
construct_feed_dict(X_b, y_b=None, w_b=None, ids_b=None)

Construct a feed dictionary from minibatch data.

TODO(rbharath): ids_b is not used here. Can we remove it?

Parameters: X_b – np.ndarray of shape (batch_size, n_features) y_b – np.ndarray of shape (batch_size, n_tasks) w_b – np.ndarray of shape (batch_size, n_tasks) ids_b – List of length (batch_size) with datapoint identifiers.
construct_graph(training, seed)

Returns a TensorflowGraph object.

cost(logits, labels, weights)

Calculate single-task training cost for a batch of examples.

Parameters: logits – Tensor with shape batch_size x n_classes containing logits. labels – Tensor with shape batch_size x n_classes containing true labels in a one-hot encoding. weights – Tensor with shape batch_size containing example weights.
Returns: A tensor with shape batch_size containing the weighted cost for each example.
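The shape contract above (per-example weighted cost from logits, one-hot labels, and weights) can be sketched in NumPy; the use of softmax cross-entropy is an assumption consistent with a classifier, not a statement of deepchem's exact loss:

```python
import numpy as np

def weighted_softmax_xent(logits, labels_onehot, weights):
    """Per-example weighted cross-entropy; returns shape (batch_size,)."""
    # Numerically stable log-softmax over classes.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    xent = -(labels_onehot * log_probs).sum(axis=1)
    return weights * xent

logits = np.array([[2.0, 0.0], [0.0, 3.0]])
labels = np.array([[1.0, 0.0], [0.0, 1.0]])  # one-hot true classes
weights = np.array([1.0, 0.5])
cost = weighted_softmax_xent(logits, labels, weights)
```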
evaluate(dataset, metrics, transformers=[], per_task_metrics=False)

Evaluates the performance of this model on specified dataset.

Parameters: dataset (dc.data.Dataset) – Dataset object. metrics (deepchem.metrics.Metric) – Evaluation metric. transformers (list) – List of deepchem.transformers.Transformer. per_task_metrics (bool) – If True, return per-task scores.
Returns: dict – Maps tasks to scores under metric.
fit(dataset, nb_epoch=10, max_checkpoints_to_keep=5, log_every_N_batches=50, checkpoint_interval=10, **kwargs)

Fit the model.

Parameters: dataset (dc.data.Dataset) – Dataset object holding training data. nb_epoch (int) – Number of training epochs. max_checkpoints_to_keep (int) – Maximum number of checkpoints to keep; older checkpoints will be deleted. log_every_N_batches (int) – Report every N batches. Useful for training on very large datasets, where epochs can take a long time to finish. checkpoint_interval (int) – Frequency at which to write checkpoints, measured in epochs.
Raises: AssertionError – If model is not in training mode.
fit_on_batch(X, y, w)

Updates existing model with new information.

get_model_filename(model_dir)

Given model directory, obtain filename for the model itself.

get_num_tasks()
get_params(deep=True)

Get parameters for this estimator.

Parameters: deep (boolean, optional) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns: params (mapping of string to any) – Parameter names mapped to their values.
get_params_filename(model_dir)

Given model directory, obtain filename for the stored model parameters.

get_task_type()
get_training_op(graph, loss)

Get training op for applying gradients to variables.

Subclasses that need to do anything fancy with gradients should override this method.

Returns: A training op.

predict(dataset, transformers=[])

Uses self to make predictions on provided Dataset object.

Returns: numpy ndarray of shape (n_samples,) y_pred
predict_on_batch(X)

Return model output for the provided input.

Restore(checkpoint) must have previously been called on this object.

Parameters: X – np.ndarray of features for the batch.
Returns: Tuple of three numpy arrays with shape n_examples x n_tasks – output: Model outputs. labels: True labels. weights: Example weights. Note that the output and labels arrays may be more than 2D, e.g. for classifier models that return class probabilities.
Raises: AssertionError – If model is not in evaluation mode. ValueError – If output and labels are not both 3D or both 2D.
predict_proba(dataset, transformers=[], n_classes=2)

TODO: Do transformers even make sense here?

Returns: numpy ndarray of shape (n_samples, n_classes*n_tasks) y_pred
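The flattened (n_samples, n_classes*n_tasks) probability array can be unpacked into one (n_samples, n_classes) block per task. Assuming tasks are laid out contiguously (an assumption about the flattened order, not stated in the docs), a NumPy reshape recovers them:

```python
import numpy as np

n_samples, n_tasks, n_classes = 5, 3, 2
y_pred = np.random.default_rng(1).random((n_samples, n_classes * n_tasks))

# Recover per-task class probabilities: axis 1 indexes tasks, axis 2 classes.
per_task = y_pred.reshape(n_samples, n_tasks, n_classes)
```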
predict_proba_on_batch(X)

Return model output for the provided input.

Restore(checkpoint) must have previously been called on this object.

Parameters: X – np.ndarray of features for the batch.
Returns: Model outputs. Note that the output arrays may be more than 2D, e.g. for classifier models that return class probabilities.
Raises: AssertionError – If model is not in evaluation mode. ValueError – If output and labels are not both 3D or both 2D.
reload()

Loads model from disk. Thin wrapper around restore() for consistency.

restore()

Restores the model from the provided training checkpoint.

Parameters: checkpoint – string. Path to checkpoint file.
save()

No-op since tf models save themselves during fit()

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns: self
class deepchem.models.tensorflow_models.robust_multitask.RobustMultitaskRegressor(n_tasks, n_features, logdir=None, bypass_layer_sizes=[100], bypass_weight_init_stddevs=[0.02], bypass_bias_init_consts=[1.0], bypass_dropouts=[0.5], **kwargs)[source]

Implements a neural network for robust multitasking.

The key idea is to add bypass layers that feed directly from the features to each task's output, which should allow individual tasks to route around bad multitasking.

add_example_weight_placeholders(graph, name_scopes)

This method creates the following Placeholders for each task:
weights_%d: Label tensor with shape batch_size.

Placeholders are wrapped in identity ops to avoid the error caused by feeding and fetching the same tensor.

add_label_placeholders(graph, name_scopes)

This method creates the following Placeholders for each task:
labels_%d: Label tensor with shape batch_size.

Placeholders are wrapped in identity ops to avoid the error caused by feeding and fetching the same tensor.

add_output_ops(graph, output)

No-op for regression models since no softmax.

add_training_cost(graph, name_scopes, output, labels, weights)
build(graph, name_scopes, training)[source]

Constructs the graph architecture as specified in its config.

This method creates the following Placeholders:
mol_features: Molecule descriptor (e.g. fingerprint) tensor with shape
batch_size x num_features.
construct_feed_dict(X_b, y_b=None, w_b=None, ids_b=None)

Construct a feed dictionary from minibatch data.

TODO(rbharath): ids_b is not used here. Can we remove it?

Parameters: X_b – np.ndarray of shape (batch_size, n_features) y_b – np.ndarray of shape (batch_size, n_tasks) w_b – np.ndarray of shape (batch_size, n_tasks) ids_b – List of length (batch_size) with datapoint identifiers.
construct_graph(training, seed)

Returns a TensorflowGraph object.

cost(output, labels, weights)

Calculate single-task training cost for a batch of examples.

Parameters: output – Tensor with shape batch_size containing predicted values. labels – Tensor with shape batch_size containing true values. weights – Tensor with shape batch_size containing example weights.
Returns: A tensor with shape batch_size containing the weighted cost for each example.
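For the regression case, the same contract can be sketched in NumPy. The squared-error form and the 0.5 factor are illustrative assumptions; the documented contract is only the (batch_size,) weighted-cost shape:

```python
import numpy as np

def weighted_l2_cost(output, labels, weights):
    """Per-example weighted squared error; returns shape (batch_size,)."""
    return weights * 0.5 * (output - labels) ** 2

cost = weighted_l2_cost(
    np.array([1.0, 2.0]),   # predicted values
    np.array([1.0, 0.0]),   # true values
    np.array([1.0, 0.5]),   # example weights
)
```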
evaluate(dataset, metrics, transformers=[], per_task_metrics=False)

Evaluates the performance of this model on specified dataset.

Parameters: dataset (dc.data.Dataset) – Dataset object. metrics (deepchem.metrics.Metric) – Evaluation metric. transformers (list) – List of deepchem.transformers.Transformer. per_task_metrics (bool) – If True, return per-task scores.
Returns: dict – Maps tasks to scores under metric.
fit(dataset, nb_epoch=10, max_checkpoints_to_keep=5, log_every_N_batches=50, checkpoint_interval=10, **kwargs)

Fit the model.

Parameters: dataset (dc.data.Dataset) – Dataset object holding training data. nb_epoch (int) – Number of training epochs. max_checkpoints_to_keep (int) – Maximum number of checkpoints to keep; older checkpoints will be deleted. log_every_N_batches (int) – Report every N batches. Useful for training on very large datasets, where epochs can take a long time to finish. checkpoint_interval (int) – Frequency at which to write checkpoints, measured in epochs.
Raises: AssertionError – If model is not in training mode.
fit_on_batch(X, y, w)

Updates existing model with new information.

get_model_filename(model_dir)

Given model directory, obtain filename for the model itself.

get_num_tasks()
get_params(deep=True)

Get parameters for this estimator.

Parameters: deep (boolean, optional) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns: params (mapping of string to any) – Parameter names mapped to their values.
get_params_filename(model_dir)

Given model directory, obtain filename for the stored model parameters.

get_task_type()
get_training_op(graph, loss)

Get training op for applying gradients to variables.

Subclasses that need to do anything fancy with gradients should override this method.

Returns: A training op.

predict(dataset, transformers=[])

Uses self to make predictions on provided Dataset object.

Returns: numpy ndarray of shape (n_samples,) y_pred
predict_on_batch(X)

Return model output for the provided input.

Restore(checkpoint) must have previously been called on this object.

Parameters: X – np.ndarray of features for the batch.
Returns: Tuple of three numpy arrays with shape n_examples x n_tasks – output: Model outputs. labels: True labels. weights: Example weights. Note that the output and labels arrays may be more than 2D, e.g. for classifier models that return class probabilities.
Raises: AssertionError – If model is not in evaluation mode. ValueError – If output and labels are not both 3D or both 2D.
predict_proba(dataset, transformers=[], n_classes=2)

TODO: Do transformers even make sense here?

Returns: numpy ndarray of shape (n_samples, n_classes*n_tasks) y_pred
predict_proba_on_batch(X)

Makes predictions of class probabilities on given batch of new data.

Parameters: X (np.ndarray) – Features
reload()

Loads model from disk. Thin wrapper around restore() for consistency.

restore()

Restores the model from the provided training checkpoint.

Parameters: checkpoint – string. Path to checkpoint file.
save()

No-op since tf models save themselves during fit()

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns: self

## deepchem.models.tensorflow_models.utils module¶

Utils for graph convolution models.

deepchem.models.tensorflow_models.utils.Kurtosis(tensor, reduction_indices=None)[source]

Compute kurtosis, the fourth standardized moment minus three.

Parameters: tensor – Input tensor. reduction_indices – Axes to reduce across. If None, reduce to a scalar.
Returns: A tensor with the same type as the input tensor.
deepchem.models.tensorflow_models.utils.Mask(t, mask)[source]

Apply a mask to a tensor.

If not None, mask should be a t.shape[:-1] tensor of 0,1 values.

Parameters: t – Input tensor. mask – Boolean mask with shape == t.shape[:-1]. If None, nothing happens.
Returns: A tensor with the same shape as the input tensor.
Raises: ValueError – If shapes do not match.
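A NumPy analogue of the described behavior (zeroing slices of the last axis where the mask is 0; this sketches the documented contract, not deepchem's TF implementation):

```python
import numpy as np

def apply_mask(t, mask):
    """Zero out entries of t along its last axis where mask == 0."""
    if mask is None:
        return t
    if mask.shape != t.shape[:-1]:
        raise ValueError("mask shape must equal t.shape[:-1]")
    # Broadcasting the mask over the last axis preserves t's shape.
    return t * mask[..., np.newaxis]

t = np.ones((2, 3, 4))
mask = np.array([[1, 0, 1], [0, 1, 1]], dtype=float)
masked = apply_mask(t, mask)
```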
deepchem.models.tensorflow_models.utils.Mean(tensor, reduction_indices=None, mask=None)[source]

Compute mean using Sum and Mul for better GPU performance.

See tf.nn.moments for additional notes on this approach.

Parameters: tensor – Input tensor. reduction_indices – Axes to reduce across. If None, reduce to a scalar. mask – Mask to apply to tensor.
Returns: A tensor with the same type as the input tensor.
deepchem.models.tensorflow_models.utils.Moment(k, tensor, standardize=False, reduction_indices=None, mask=None)[source]

Compute the k-th central moment of a tensor, possibly standardized.

Parameters: k – Which moment to compute. 1 = mean, 2 = variance, etc. tensor – Input tensor. standardize – If True, returns the standardized moment, i.e. the central moment divided by the k-th power of the standard deviation. reduction_indices – Axes to reduce across. If None, reduce to a scalar. mask – Mask to apply to tensor.
Returns: The mean and the requested moment.
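A NumPy sketch of the computation (deepchem's version operates on TF tensors with optional masks; this reduces over all axes and ignores the mask). Note the kurtosis convention above subtracts 3 from the standardized fourth moment:

```python
import numpy as np

def moment(k, x, standardize=False, axis=None):
    """Return (mean, k-th central moment); standardized divides by sigma**k."""
    mean = x.mean(axis=axis)
    central = ((x - mean) ** k).mean(axis=axis)
    if standardize:
        central = central / x.std(axis=axis) ** k
    return mean, central

x = np.array([1.0, 2.0, 3.0, 4.0])
_, var = moment(2, x)                      # variance
_, skew = moment(3, x, standardize=True)   # skewness (third standardized moment)
_, kurt = moment(4, x, standardize=True)   # subtract 3 for the Kurtosis above
```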
deepchem.models.tensorflow_models.utils.ParseCheckpoint(checkpoint)[source]

Parse a checkpoint file.

Parameters: checkpoint – Path to checkpoint. The checkpoint is either a serialized CheckpointState proto or an actual checkpoint file.
Returns: The path to an actual checkpoint file.
deepchem.models.tensorflow_models.utils.Skewness(tensor, reduction_indices=None)[source]

Compute skewness, the third standardized moment.

Parameters: tensor – Input tensor. reduction_indices – Axes to reduce across. If None, reduce to a scalar.
Returns: A tensor with the same type as the input tensor.
deepchem.models.tensorflow_models.utils.StringToOp(string)[source]

Get a TensorFlow op from a string.

Parameters: string – String description of an op, such as ‘sum’ or ‘mean’.
Returns: A TensorFlow op.
Raises: NotImplementedError – If string does not match a supported operation.
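The dispatch pattern can be sketched with a plain dictionary; the set of supported strings here ('sum', 'mean', 'max', 'min') is a guess, and NumPy stands in for the TensorFlow ops:

```python
import numpy as np

def string_to_op(string):
    """Map a string description to a reduction function."""
    ops = {
        "sum": np.sum,
        "mean": np.mean,
        "max": np.max,
        "min": np.min,
    }
    try:
        return ops[string]
    except KeyError:
        raise NotImplementedError("unsupported op: %s" % string)

op = string_to_op("mean")
result = op(np.array([1.0, 2.0, 3.0]))
```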
deepchem.models.tensorflow_models.utils.Variance(tensor, reduction_indices=None, mask=None)[source]

Compute variance.

Parameters: tensor – Input tensor. reduction_indices – Axes to reduce across. If None, reduce to a scalar. mask – Mask to apply to tensor.
Returns: A tensor with the same type as the input tensor.

## Module contents¶

Helper operations and classes for general model building.

class deepchem.models.tensorflow_models.TensorflowClassifier(n_tasks, n_features, logdir=None, layer_sizes=[1000], weight_init_stddevs=[0.02], bias_init_consts=[1.0], penalty=0.0, penalty_type='l2', dropouts=[0.5], learning_rate=0.001, momentum=0.9, optimizer='adam', batch_size=50, n_classes=2, pad_batches=False, verbose=True, seed=None, **kwargs)[source]

Classification model.

Subclasses must set the following attributes:
output: logits op(s) used for computing classification loss and predicted classification probabilities for each task.
add_example_weight_placeholders(graph, name_scopes)

This method creates the following Placeholders for each task:
weights_%d: Label tensor with shape batch_size.

Placeholders are wrapped in identity ops to avoid the error caused by feeding and fetching the same tensor.

add_label_placeholders(graph, name_scopes)[source]

This method creates the following Placeholders for each task:
labels_%d: Label tensor with shape batch_size x n_classes.

Placeholders are wrapped in identity ops to avoid the error caused by feeding and fetching the same tensor.
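The batch_size x n_classes one-hot layout these label placeholders expect can be produced from integer class labels with NumPy, for example:

```python
import numpy as np

def to_one_hot(y, n_classes=2):
    """Convert integer labels (batch_size,) to one-hot (batch_size, n_classes)."""
    onehot = np.zeros((len(y), n_classes))
    onehot[np.arange(len(y)), y] = 1.0
    return onehot

labels = to_one_hot(np.array([0, 1, 1, 0]))
```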

add_output_ops(graph, output)

Replace logits with softmax outputs.

add_training_cost(graph, name_scopes, output, labels, weights)
build(graph, name_scopes, training)

Define the core graph.

NOTE(user): Operations defined here should be in their own name scope to avoid any ambiguity when restoring checkpoints.
Raises: NotImplementedError – if not overridden by concrete subclass.
construct_feed_dict(X_b, y_b=None, w_b=None, ids_b=None)

Transform a minibatch of data into a feed_dict.

Raises: NotImplementedError – if not overridden by concrete subclass.
construct_graph(training, seed)

Returns a TensorflowGraph object.

cost(logits, labels, weights)[source]

Calculate single-task training cost for a batch of examples.

Parameters: logits – Tensor with shape batch_size x n_classes containing logits. labels – Tensor with shape batch_size x n_classes containing true labels in a one-hot encoding. weights – Tensor with shape batch_size containing example weights.
Returns: A tensor with shape batch_size containing the weighted cost for each example.
evaluate(dataset, metrics, transformers=[], per_task_metrics=False)

Evaluates the performance of this model on specified dataset.

Parameters: dataset (dc.data.Dataset) – Dataset object. metrics (deepchem.metrics.Metric) – Evaluation metric. transformers (list) – List of deepchem.transformers.Transformer. per_task_metrics (bool) – If True, return per-task scores.
Returns: dict – Maps tasks to scores under metric.
fit(dataset, nb_epoch=10, max_checkpoints_to_keep=5, log_every_N_batches=50, checkpoint_interval=10, **kwargs)

Fit the model.

Parameters: dataset (dc.data.Dataset) – Dataset object holding training data. nb_epoch (int) – Number of training epochs. max_checkpoints_to_keep (int) – Maximum number of checkpoints to keep; older checkpoints will be deleted. log_every_N_batches (int) – Report every N batches. Useful for training on very large datasets, where epochs can take a long time to finish. checkpoint_interval (int) – Frequency at which to write checkpoints, measured in epochs.
Raises: AssertionError – If model is not in training mode.
fit_on_batch(X, y, w)

Updates existing model with new information.

get_model_filename(model_dir)

Given model directory, obtain filename for the model itself.

get_num_tasks()
get_params(deep=True)

Get parameters for this estimator.

Parameters: deep (boolean, optional) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns: params (mapping of string to any) – Parameter names mapped to their values.
get_params_filename(model_dir)

Given model directory, obtain filename for the stored model parameters.

get_task_type()[source]
get_training_op(graph, loss)

Get training op for applying gradients to variables.

Subclasses that need to do anything fancy with gradients should override this method.

Returns: A training op.

predict(dataset, transformers=[])

Uses self to make predictions on provided Dataset object.

Returns: numpy ndarray of shape (n_samples,) y_pred
predict_on_batch(X)[source]

Return model output for the provided input.

Restore(checkpoint) must have previously been called on this object.

Parameters: X – np.ndarray of features for the batch.
Returns: Tuple of three numpy arrays with shape n_examples x n_tasks – output: Model outputs. labels: True labels. weights: Example weights. Note that the output and labels arrays may be more than 2D, e.g. for classifier models that return class probabilities.
Raises: AssertionError – If model is not in evaluation mode. ValueError – If output and labels are not both 3D or both 2D.
predict_proba(dataset, transformers=[], n_classes=2)

TODO: Do transformers even make sense here?

Returns: numpy ndarray of shape (n_samples, n_classes*n_tasks) y_pred
predict_proba_on_batch(X)[source]

Return model output for the provided input.

Restore(checkpoint) must have previously been called on this object.

Parameters: X – np.ndarray of features for the batch.
Returns: Model outputs. Note that the output arrays may be more than 2D, e.g. for classifier models that return class probabilities.
Raises: AssertionError – If model is not in evaluation mode. ValueError – If output and labels are not both 3D or both 2D.
reload()

Loads model from disk. Thin wrapper around restore() for consistency.

restore()

Restores the model from the provided training checkpoint.

Parameters: checkpoint – string. Path to checkpoint file.
save()

No-op since tf models save themselves during fit()

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns: self
class deepchem.models.tensorflow_models.TensorflowGraph(graph, session, name_scopes, output, labels, weights, loss)[source]

Bases: object

Simple class that holds the information needed to run a TensorFlow graph.

static get_feed_dict(named_values)[source]
static get_placeholder_scope(graph, name_scopes)[source]

Gets placeholder scope.

static shared_name_scope(name, graph, name_scopes)[source]

Returns a singleton TensorFlow scope with the given name.

Used to prevent ‘_1’-appended scopes when sharing scopes with child classes.

Parameters: name – String. Name scope for group of operations.
Returns: tf.name_scope with the provided name.
class deepchem.models.tensorflow_models.TensorflowGraphModel(n_tasks, n_features, logdir=None, layer_sizes=[1000], weight_init_stddevs=[0.02], bias_init_consts=[1.0], penalty=0.0, penalty_type='l2', dropouts=[0.5], learning_rate=0.001, momentum=0.9, optimizer='adam', batch_size=50, n_classes=2, pad_batches=False, verbose=True, seed=None, **kwargs)[source]

Parent class for deepchem Tensorflow models.

Generic base class for defining, training, and evaluating TensorflowGraphs.

Has the following attributes:
placeholder_root: String placeholder prefix, used to create placeholder_scope.

Classifier subclasses additionally use n_classes.

Subclasses must implement the methods below that raise NotImplementedError when not overridden.

Parameters: train – If True, model is in training mode. logdir – Directory for output files.
add_example_weight_placeholders(graph, name_scopes)[source]

This method creates the following Placeholders for each task:
weights_%d: Label tensor with shape batch_size.

Placeholders are wrapped in identity ops to avoid the error caused by feeding and fetching the same tensor.

add_label_placeholders(graph, name_scopes)[source]

This method creates the following Placeholders for each task:
labels_%d: Float label tensor. For classification tasks, this tensor will
have shape batch_size x n_classes. For regression tasks, this tensor will have shape batch_size.
Raises: NotImplementedError – if not overridden by concrete subclass.
add_output_ops(graph, output)[source]

Replace logits with softmax outputs.

add_training_cost(graph, name_scopes, output, labels, weights)[source]
build(graph, name_scopes, training)[source]

Define the core graph.

NOTE(user): Operations defined here should be in their own name scope to avoid any ambiguity when restoring checkpoints.
Raises: NotImplementedError – if not overridden by concrete subclass.
construct_feed_dict(X_b, y_b=None, w_b=None, ids_b=None)[source]

Transform a minibatch of data into a feed_dict.

Raises: NotImplementedError – if not overridden by concrete subclass.
construct_graph(training, seed)[source]

Returns a TensorflowGraph object.

cost(output, labels, weights)[source]

Calculate single-task training cost for a batch of examples.

Parameters: output – Tensor with model outputs. labels – Tensor with true labels. weights – Tensor with shape batch_size containing example weights.
Returns: A tensor with shape batch_size containing the weighted cost for each example, for use in subclasses that want to calculate additional costs.
evaluate(dataset, metrics, transformers=[], per_task_metrics=False)

Evaluates the performance of this model on specified dataset.

Parameters: dataset (dc.data.Dataset) – Dataset object. metrics (deepchem.metrics.Metric) – Evaluation metric. transformers (list) – List of deepchem.transformers.Transformer. per_task_metrics (bool) – If True, return per-task scores.
Returns: dict – Maps tasks to scores under metric.
fit(dataset, nb_epoch=10, max_checkpoints_to_keep=5, log_every_N_batches=50, checkpoint_interval=10, **kwargs)[source]

Fit the model.

Parameters: dataset (dc.data.Dataset) – Dataset object holding training data. nb_epoch (int) – Number of training epochs. max_checkpoints_to_keep (int) – Maximum number of checkpoints to keep; older checkpoints will be deleted. log_every_N_batches (int) – Report every N batches. Useful for training on very large datasets, where epochs can take a long time to finish. checkpoint_interval (int) – Frequency at which to write checkpoints, measured in epochs.
Raises: AssertionError – If model is not in training mode.
fit_on_batch(X, y, w)

Updates existing model with new information.

get_model_filename(model_dir)

Given model directory, obtain filename for the model itself.

get_num_tasks()[source]
get_params(deep=True)

Get parameters for this estimator.

Parameters: deep (boolean, optional) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns: params (mapping of string to any) – Parameter names mapped to their values.
get_params_filename(model_dir)

Given model directory, obtain filename for the stored model parameters.

get_task_type()

Currently models can only be classifiers or regressors.

get_training_op(graph, loss)[source]

Get training op for applying gradients to variables.

Subclasses that need to do anything fancy with gradients should override this method.

Returns: A training op.

predict(dataset, transformers=[])[source]

Uses self to make predictions on provided Dataset object.

Returns: numpy ndarray of shape (n_samples,) y_pred
predict_on_batch(X, **kwargs)

Makes predictions on given batch of new data.

Parameters: X (np.ndarray) – Features
predict_proba(dataset, transformers=[], n_classes=2)[source]

TODO: Do transformers even make sense here?

Returns: numpy ndarray of shape (n_samples, n_classes*n_tasks) y_pred
predict_proba_on_batch(X)

Makes predictions of class probabilities on given batch of new data.

Parameters: X (np.ndarray) – Features
reload()[source]

Loads model from disk. Thin wrapper around restore() for consistency.

restore()[source]

Restores the model from the provided training checkpoint.

Parameters: checkpoint – string. Path to checkpoint file.
save()[source]

No-op since tf models save themselves during fit()

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns: self
class deepchem.models.tensorflow_models.TensorflowRegressor(n_tasks, n_features, logdir=None, layer_sizes=[1000], weight_init_stddevs=[0.02], bias_init_consts=[1.0], penalty=0.0, penalty_type='l2', dropouts=[0.5], learning_rate=0.001, momentum=0.9, optimizer='adam', batch_size=50, n_classes=2, pad_batches=False, verbose=True, seed=None, **kwargs)[source]

Regression model.

Subclasses must set the following attributes:
output: Op(s) used for computing regression loss and predicted regression values for each task.
add_example_weight_placeholders(graph, name_scopes)

This method creates the following Placeholders for each task:
weights_%d: Label tensor with shape batch_size.

Placeholders are wrapped in identity ops to avoid the error caused by feeding and fetching the same tensor.

add_label_placeholders(graph, name_scopes)[source]

This method creates the following Placeholders for each task:
labels_%d: Label tensor with shape batch_size.

Placeholders are wrapped in identity ops to avoid the error caused by feeding and fetching the same tensor.

add_output_ops(graph, output)[source]

No-op for regression models since no softmax.

add_training_cost(graph, name_scopes, output, labels, weights)
build(graph, name_scopes, training)

Define the core graph.

NOTE(user): Operations defined here should be in their own name scope to avoid any ambiguity when restoring checkpoints.
Raises: NotImplementedError – if not overridden by concrete subclass.
construct_feed_dict(X_b, y_b=None, w_b=None, ids_b=None)

Transform a minibatch of data into a feed_dict.

Raises: NotImplementedError – if not overridden by concrete subclass.
construct_graph(training, seed)

Returns a TensorflowGraph object.

cost(output, labels, weights)[source]

Calculate single-task training cost for a batch of examples.

Parameters: output – Tensor with shape batch_size containing predicted values. labels – Tensor with shape batch_size containing true values. weights – Tensor with shape batch_size containing example weights.
Returns: A tensor with shape batch_size containing the weighted cost for each example.
evaluate(dataset, metrics, transformers=[], per_task_metrics=False)

Evaluates the performance of this model on specified dataset.

Parameters: dataset (dc.data.Dataset) – Dataset object. metrics (deepchem.metrics.Metric) – Evaluation metric. transformers (list) – List of deepchem.transformers.Transformer. per_task_metrics (bool) – If True, return per-task scores.
Returns: dict – Maps tasks to scores under metric.
fit(dataset, nb_epoch=10, max_checkpoints_to_keep=5, log_every_N_batches=50, checkpoint_interval=10, **kwargs)

Fit the model.

Parameters: dataset (dc.data.Dataset) – Dataset object holding training data. nb_epoch (int) – Number of training epochs. max_checkpoints_to_keep (int) – Maximum number of checkpoints to keep; older checkpoints will be deleted. log_every_N_batches (int) – Report every N batches. Useful for training on very large datasets, where epochs can take a long time to finish. checkpoint_interval (int) – Frequency at which to write checkpoints, measured in epochs.
Raises: AssertionError – If model is not in training mode.
fit_on_batch(X, y, w)

Updates existing model with new information.

get_model_filename(model_dir)

Given model directory, obtain filename for the model itself.

get_num_tasks()
get_params(deep=True)

Get parameters for this estimator.

Parameters: deep (boolean, optional) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns: params (mapping of string to any) – Parameter names mapped to their values.
get_params_filename(model_dir)

Given model directory, obtain filename for the stored model parameters.

get_task_type()[source]
get_training_op(graph, loss)

Get training op for applying gradients to variables.

Subclasses that need to do anything fancy with gradients should override this method.

Returns: A training op.

predict(dataset, transformers=[])

Uses self to make predictions on provided Dataset object.

Returns: numpy ndarray of shape (n_samples,) y_pred
predict_on_batch(X)[source]

Return model output for the provided input.

Restore(checkpoint) must have previously been called on this object.

Parameters: X – np.ndarray of features for the batch.
Returns: Tuple of three numpy arrays with shape n_examples x n_tasks – output: Model outputs. labels: True labels. weights: Example weights. Note that the output and labels arrays may be more than 2D, e.g. for classifier models that return class probabilities.
Raises: AssertionError – If model is not in evaluation mode. ValueError – If output and labels are not both 3D or both 2D.
predict_proba(dataset, transformers=[], n_classes=2)

TODO: Do transformers even make sense here?

Returns: numpy ndarray of shape (n_samples, n_classes*n_tasks) y_pred
predict_proba_on_batch(X)

Makes predictions of class probabilities on given batch of new data.

Parameters: X (np.ndarray) – Features
reload()

Loads model from disk. Thin wrapper around restore() for consistency.

restore()

Restores the model from the provided training checkpoint.

Parameters: checkpoint – string. Path to checkpoint file.
save()

No-op since tf models save themselves during fit()

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns: self
deepchem.models.tensorflow_models.softmax(x)[source]

Simple numpy softmax implementation.
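A minimal numerically stable version consistent with that description (deepchem's exact axis handling is not shown here):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    # Subtracting the row max avoids overflow in exp without changing the result.
    z = x - np.max(x, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

probs = softmax(np.array([[1.0, 1.0], [0.0, 10.0]]))
```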