deepchem.models.tf_new_models package

Submodules

deepchem.models.tf_new_models.graph_models module

Convenience classes for assembling graph models.

class deepchem.models.tf_new_models.graph_models.AlternateSequentialWeaveGraph(batch_size, max_atoms=50, n_atom_feat=75, n_pair_feat=14)[source]

Bases: deepchem.models.tf_new_models.graph_models.SequentialGraph

Alternate implementation of SequentialGraph for Weave models

add(layer)[source]

Adds a new layer to model.

get_graph_topology()
get_layer(layer_id)
get_num_output_features()

Gets the output shape of the featurization layers of the network

return_inputs()
return_outputs()
class deepchem.models.tf_new_models.graph_models.SequentialDAGGraph(n_atom_feat=75, max_atoms=50)[source]

Bases: deepchem.models.tf_new_models.graph_models.SequentialGraph

SequentialGraph for DAG models

add(layer)[source]

Adds a new layer to model.

get_graph_topology()
get_layer(layer_id)
get_num_output_features()

Gets the output shape of the featurization layers of the network

return_inputs()
return_outputs()
class deepchem.models.tf_new_models.graph_models.SequentialDTNNGraph(n_distance=100, distance_min=-1.0, distance_max=18.0)[source]

Bases: deepchem.models.tf_new_models.graph_models.SequentialGraph

An analog of the Keras Sequential class for Coulomb Matrix data.

Automatically generates and passes topology placeholders to each layer.

add(layer)[source]

Adds a new layer to model.

get_graph_topology()
get_layer(layer_id)
get_num_output_features()

Gets the output shape of the featurization layers of the network

return_inputs()
return_outputs()
class deepchem.models.tf_new_models.graph_models.SequentialGraph(n_feat)[source]

Bases: object

An analog of the Keras Sequential class for Graph data.

Like the Sequential class from Keras, but automatically passes topology placeholders from GraphTopology to each graph layer (from the layers module) added to the network. Non-graph layers don't get the extra placeholders.
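
A minimal usage sketch, assuming the companion graph layers (GraphConv, GraphPool, Dense, GraphGather) from legacy DeepChem's deepchem.nn module, which are not documented on this page; layer names and arguments may differ between releases:

    import deepchem as dc
    from deepchem.models.tf_new_models.graph_models import SequentialGraph

    n_feat = 75  # atom feature dimension; matches ConvMolFeaturizer's default
    graph = SequentialGraph(n_feat)
    # Graph layers receive the topology placeholders automatically.
    graph.add(dc.nn.GraphConv(64, n_feat, activation='relu'))
    graph.add(dc.nn.GraphPool())
    # Plain layers such as Dense are added without the extra placeholders.
    graph.add(dc.nn.Dense(128, 64, activation='relu'))
    graph.add(dc.nn.GraphGather(batch_size=50, activation='tanh'))
    print(graph.get_num_output_features())  # width of the featurization output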

add(layer)[source]

Adds a new layer to model.

get_graph_topology()[source]
get_layer(layer_id)[source]
get_num_output_features()[source]

Gets the output shape of the featurization layers of the network

return_inputs()[source]
return_outputs()[source]
class deepchem.models.tf_new_models.graph_models.SequentialSupportGraph(n_feat)[source]

Bases: object

An analog of the Keras Sequential model for test/support models.

add(layer)[source]

Adds a layer to both test/support stacks.

Note that the layer transformation is performed independently on the test/support tensors.

add_support(layer)[source]

Adds a layer to support.

add_test(layer)[source]

Adds a layer to test.

get_support_output()[source]
get_test_output()[source]
join(layer)[source]

Joins the test and support streams with a two-input, two-output layer.
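
A hedged sketch of the add/add_test/add_support/join pattern; GraphConv, GraphPool, GraphGather and ResiLSTMEmbedding are assumed from legacy deepchem.nn and are not documented on this page:

    import deepchem as dc
    from deepchem.models.tf_new_models.graph_models import SequentialSupportGraph

    n_feat = 75
    test_batch_size, support_batch_size = 10, 10

    support_graph = SequentialSupportGraph(n_feat)
    # Shared layers are applied independently to the test and support tensors.
    support_graph.add(dc.nn.GraphConv(64, n_feat, activation='relu'))
    support_graph.add(dc.nn.GraphPool())
    # Branch-specific gathers, one per stack.
    support_graph.add_test(dc.nn.GraphGather(test_batch_size))
    support_graph.add_support(dc.nn.GraphGather(support_batch_size))
    # Fuse the two streams with a two-input, two-output layer.
    support_graph.join(dc.nn.ResiLSTMEmbedding(test_batch_size,
                                               support_batch_size, 64, max_depth=3))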

return_inputs()[source]
return_outputs()[source]
class deepchem.models.tf_new_models.graph_models.SequentialWeaveGraph(max_atoms=50, n_atom_feat=75, n_pair_feat=14)[source]

Bases: deepchem.models.tf_new_models.graph_models.SequentialGraph

SequentialGraph for Weave models

add(layer)[source]

Adds a new layer to model.

get_graph_topology()
get_layer(layer_id)
get_num_output_features()

Gets the output shape of the featurization layers of the network

return_inputs()
return_outputs()

deepchem.models.tf_new_models.graph_topology module

Manages Placeholders for Graph convolution networks.

class deepchem.models.tf_new_models.graph_topology.AlternateWeaveGraphTopology(batch_size, max_atoms=50, n_atom_feat=75, n_pair_feat=14, name='Weave_topology')[source]

Bases: deepchem.models.tf_new_models.graph_topology.GraphTopology

Manages placeholders associated with a batch of graphs and their topology.

batch_to_feed_dict(batch)[source]

Converts the current batch of WeaveMol objects into a TensorFlow feed_dict.

Assigns the atom features and pair features to the placeholder tensors.

Parameters:batch (np.ndarray) – Array of WeaveMol objects
Returns:feed_dict – Can be merged with other feed_dicts for input into TensorFlow
Return type:dict
get_atom_features_placeholder()
get_deg_adjacency_lists_placeholders()
get_deg_slice_placeholder()
get_input_placeholders()

All placeholders.

Contains atom_features placeholder and topology placeholders.

get_membership_placeholder()
get_pair_features_placeholder()[source]
get_topology_placeholders()

Returns topology placeholders

Consists of deg_slice_placeholder, membership_placeholder, and the deg_adj_list_placeholders.

class deepchem.models.tf_new_models.graph_topology.DAGGraphTopology(n_atom_feat=75, max_atoms=50, name='topology')[source]

Bases: deepchem.models.tf_new_models.graph_topology.GraphTopology

GraphTopology for DAG models

batch_to_feed_dict(batch)[source]

Converts the current batch of mol_graphs into a TensorFlow feed_dict.

Assigns the graph information in an array of ConvMol objects to the placeholder tensors for DAG models.

Parameters:batch (np.ndarray) – Array of ConvMol objects
Returns:feed_dict – Can be merged with other feed_dicts for input into TensorFlow
Return type:dict
get_atom_features_placeholder()
get_calculation_orders_placeholder()[source]
get_deg_adjacency_lists_placeholders()
get_deg_slice_placeholder()
get_input_placeholders()

All placeholders.

Contains atom_features placeholder and topology placeholders.

get_membership_placeholder()
get_parents_placeholder()[source]
get_topology_placeholders()

Returns topology placeholders

Consists of deg_slice_placeholder, membership_placeholder, and the deg_adj_list_placeholders.

class deepchem.models.tf_new_models.graph_topology.DTNNGraphTopology(n_distance=100, distance_min=-1.0, distance_max=18.0, name='DTNN_topology')[source]

Bases: deepchem.models.tf_new_models.graph_topology.GraphTopology

Manages placeholders associated with a batch of graphs and their topology.

batch_to_feed_dict(batch)[source]

Converts the current batch of Coulomb Matrices into a TensorFlow feed_dict.

Assigns the atom numbers and distance info to the placeholder tensors.

Parameters:batch (np.ndarray) – Array of Coulomb Matrices
Returns:feed_dict – Can be merged with other feed_dicts for input into TensorFlow
Return type:dict
get_atom_features_placeholder()
get_atom_number_placeholder()[source]
get_deg_adjacency_lists_placeholders()
get_deg_slice_placeholder()
get_distance_placeholder()[source]
get_input_placeholders()

All placeholders.

Contains atom_features placeholder and topology placeholders.

get_membership_placeholder()
get_topology_placeholders()

Returns topology placeholders

Consists of deg_slice_placeholder, membership_placeholder, and the deg_adj_list_placeholders.

class deepchem.models.tf_new_models.graph_topology.GraphTopology(n_feat, name='topology', max_deg=10, min_deg=0)[source]

Bases: object

Manages placeholders associated with a batch of graphs and their topology.

batch_to_feed_dict(batch)[source]

Converts the current batch of mol_graphs into a TensorFlow feed_dict.

Assigns the graph information in an array of ConvMol objects to the placeholder tensors.

Parameters:batch (np.ndarray) – Array of ConvMol objects
Returns:feed_dict – Can be merged with other feed_dicts for input into TensorFlow
Return type:dict
get_atom_features_placeholder()[source]
get_deg_adjacency_lists_placeholders()[source]
get_deg_slice_placeholder()[source]
get_input_placeholders()[source]

All placeholders.

Contains atom_features placeholder and topology placeholders.

get_membership_placeholder()[source]
get_topology_placeholders()[source]

Returns topology placeholders

Consists of deg_slice_placeholder, membership_placeholder, and the deg_adj_list_placeholders.
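
A hedged sketch of converting featurized molecules into a feed_dict through these placeholders; ConvMolFeaturizer and RDKit are assumed from legacy DeepChem and are not documented on this page:

    import numpy as np
    import tensorflow as tf
    import deepchem as dc
    from rdkit import Chem
    from deepchem.models.tf_new_models.graph_topology import GraphTopology

    featurizer = dc.feat.ConvMolFeaturizer()
    mols = featurizer.featurize([Chem.MolFromSmiles(s) for s in ('CCO', 'c1ccccc1')])

    topology = GraphTopology(n_feat=75)  # 75 matches ConvMolFeaturizer's atom features
    feed_dict = topology.batch_to_feed_dict(np.asarray(mols))

    atom_features = topology.get_atom_features_placeholder()
    with tf.Session() as sess:
        # (total atoms in the batch) x 75 atom features
        print(sess.run(tf.shape(atom_features), feed_dict=feed_dict))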

class deepchem.models.tf_new_models.graph_topology.WeaveGraphTopology(max_atoms=50, n_atom_feat=75, n_pair_feat=14, name='Weave_topology')[source]

Bases: deepchem.models.tf_new_models.graph_topology.GraphTopology

Manages placeholders associated with a batch of graphs and their topology.

batch_to_feed_dict(batch)[source]

Converts the current batch of WeaveMol objects into a TensorFlow feed_dict.

Assigns the atom features and pair features to the placeholder tensors.

Parameters:batch (np.ndarray) – Array of WeaveMol objects
Returns:feed_dict – Can be merged with other feed_dicts for input into TensorFlow
Return type:dict
get_atom_features_placeholder()
get_deg_adjacency_lists_placeholders()
get_deg_slice_placeholder()
get_input_placeholders()

All placeholders.

Contains atom_features placeholder and topology placeholders.

get_membership_placeholder()
get_pair_features_placeholder()[source]
get_topology_placeholders()

Returns topology placeholders

Consists of deg_slice_placeholder, membership_placeholder, and the deg_adj_list_placeholders.

deepchem.models.tf_new_models.graph_topology.merge_dicts(l)[source]

Convenience function to merge a list of dictionaries.

deepchem.models.tf_new_models.graph_topology.merge_two_dicts(x, y)[source]
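
A minimal sketch of combining per-component feed dicts before a session run; the placeholder names here are stand-ins, not the module's actual tensor names:

    from deepchem.models.tf_new_models.graph_topology import merge_dicts, merge_two_dicts

    topology_feed = {'atom_features:0': [[0.0]]}  # stand-in for topology.batch_to_feed_dict(...)
    label_feed = {'labels:0': [[1.0]], 'weights:0': [[1.0]]}
    dropout_feed = {'keep_prob:0': 0.8}

    full_feed = merge_dicts([topology_feed, label_feed, dropout_feed])
    assert full_feed == merge_two_dicts(merge_two_dicts(topology_feed, label_feed),
                                        dropout_feed)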

deepchem.models.tf_new_models.multitask_classifier module

Implements a multitask graph convolutional classifier.

class deepchem.models.tf_new_models.multitask_classifier.MultitaskGraphClassifier(model, n_tasks, n_feat, logdir=None, batch_size=50, final_loss='cross_entropy', learning_rate=0.001, optimizer_type='adam', learning_rate_decay_time=1000, beta1=0.9, beta2=0.999, pad_batches=True, verbose=True)[source]

Bases: deepchem.models.models.Model
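
A hedged end-to-end sketch: build a SequentialGraph featurization stack, wrap it in this classifier, then fit and evaluate. The dc.nn layers, ConvMolFeaturizer, NumpyDataset and Metric helpers are assumed from legacy DeepChem and are not documented on this page:

    import numpy as np
    import deepchem as dc
    from rdkit import Chem
    from deepchem.models.tf_new_models.graph_models import SequentialGraph
    from deepchem.models.tf_new_models.multitask_classifier import MultitaskGraphClassifier

    n_tasks, n_feat, batch_size = 2, 75, 50

    graph = SequentialGraph(n_feat)
    graph.add(dc.nn.GraphConv(64, n_feat, activation='relu'))
    graph.add(dc.nn.GraphPool())
    graph.add(dc.nn.GraphGather(batch_size, activation='tanh'))

    clf = MultitaskGraphClassifier(graph, n_tasks, n_feat,
                                   batch_size=batch_size, learning_rate=1e-3)

    # The dataset's X column must hold ConvMol objects.
    smiles = ['CCO', 'CCN', 'c1ccccc1', 'CC(=O)O']
    X = dc.feat.ConvMolFeaturizer().featurize([Chem.MolFromSmiles(s) for s in smiles])
    y = np.random.randint(2, size=(len(smiles), n_tasks))
    dataset = dc.data.NumpyDataset(X, y)

    clf.fit(dataset, nb_epoch=1)
    metric = dc.metrics.Metric(dc.metrics.roc_auc_score, np.mean)
    print(clf.evaluate(dataset, [metric]))  # schematic; real data is needed for a meaningful score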

add_optimizer()[source]
add_softmax(outputs)[source]

Replace logits with softmax outputs.

add_training_loss(final_loss, logits)[source]

Computes loss using logits.

build()[source]
construct_feed_dict(X_b, y_b=None, w_b=None, training=True)[source]

Get initial information about task normalization

evaluate(dataset, metrics, transformers=[], per_task_metrics=False)

Evaluates the performance of this model on specified dataset.

Parameters:
  • dataset (dc.data.Dataset) – Dataset object.
  • metrics (deepchem.metrics.Metric) – Evaluation metric
  • transformers (list) – List of deepchem.transformers.Transformer
  • per_task_metrics (bool) – If True, return per-task scores.
Returns:Maps tasks to scores under metric.
Return type:dict

fit(dataset, nb_epoch=10, max_checkpoints_to_keep=5, log_every_N_batches=50, checkpoint_interval=10, **kwargs)[source]
fit_on_batch(X, y, w)

Updates existing model with new information.

get_model_filename(model_dir)

Given model directory, obtain filename for the model itself.

get_num_tasks()[source]

Needed to use Model.predict() from superclass.

get_params(deep=True)

Get parameters for this estimator.

Parameters:deep (boolean, optional) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:params – Parameter names mapped to their values.
Return type:mapping of string to any
get_params_filename(model_dir)

Given model directory, obtain filename for the model parameters.

get_task_type()

Currently models can only be classifiers or regressors.

predict(dataset, transformers=[], **kwargs)[source]

Wraps predict to set batch_size/padding.

predict_on_batch(X)[source]

Return model output for the provided input.

predict_proba(dataset, transformers=[], n_classes=2, **kwargs)[source]

Wraps predict_proba to set batch_size/padding.

predict_proba_on_batch(X, n_classes=2)[source]

Returns class probabilities on batch

reload()

Reload trained model from disk.

save()[source]

No-op since this model doesn’t currently support saving...

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns:
Return type:self
deepchem.models.tf_new_models.multitask_classifier.get_loss_fn(final_loss)[source]

deepchem.models.tf_new_models.multitask_regressor module

Implements a multitask graph-convolutional regression.

class deepchem.models.tf_new_models.multitask_regressor.DTNNMultitaskGraphRegressor(model, n_tasks, n_feat, logdir=None, batch_size=50, final_loss='weighted_L2', learning_rate=0.001, optimizer_type='adam', learning_rate_decay_time=1000, beta1=0.9, beta2=0.999, pad_batches=True, verbose=True)[source]

Bases: deepchem.models.tf_new_models.multitask_regressor.MultitaskGraphRegressor

add_optimizer()
add_training_loss(final_loss, outputs)

Computes loss using the model outputs.

build()[source]
construct_feed_dict(X_b, y_b=None, w_b=None, training=True)

Get initial information about task normalization

evaluate(dataset, metrics, transformers=[], per_task_metrics=False)

Evaluates the performance of this model on specified dataset.

Parameters:
  • dataset (dc.data.Dataset) – Dataset object.
  • metrics (deepchem.metrics.Metric) – Evaluation metric
  • transformers (list) – List of deepchem.transformers.Transformer
  • per_task_metrics (bool) – If True, return per-task scores.
Returns:Maps tasks to scores under metric.
Return type:dict

fit(dataset, nb_epoch=10, max_checkpoints_to_keep=5, log_every_N_batches=50, checkpoint_interval=10, **kwargs)
fit_on_batch(X, y, w)

Updates existing model with new information.

get_model_filename(model_dir)

Given model directory, obtain filename for the model itself.

get_num_tasks()

Needed to use Model.predict() from superclass.

get_params(deep=True)

Get parameters for this estimator.

Parameters:deep (boolean, optional) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:params – Parameter names mapped to their values.
Return type:mapping of string to any
get_params_filename(model_dir)

Given model directory, obtain filename for the model parameters.

get_task_type()

Currently models can only be classifiers or regressors.

predict(dataset, transformers=[], **kwargs)

Wraps predict to set batch_size/padding.

predict_on_batch(X)

Return model output for the provided input.

predict_proba(dataset, transformers=[], batch_size=None, n_classes=2)

TODO: Do transformers even make sense here?

Returns:numpy ndarray of shape (n_samples, n_classes*n_tasks)
Return type:y_pred
predict_proba_on_batch(X)

Makes predictions of class probabilities on given batch of new data.

Parameters:X (np.ndarray) – Features
reload()

Reload trained model from disk.

save()

No-op since this model doesn’t currently support saving...

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns:
Return type:self
class deepchem.models.tf_new_models.multitask_regressor.MultitaskGraphRegressor(model, n_tasks, n_feat, logdir=None, batch_size=50, final_loss='weighted_L2', learning_rate=0.001, optimizer_type='adam', learning_rate_decay_time=1000, beta1=0.9, beta2=0.999, pad_batches=True, verbose=True)[source]

Bases: deepchem.models.models.Model

add_optimizer()[source]
add_training_loss(final_loss, outputs)[source]

Computes loss using the model outputs.

build()[source]
construct_feed_dict(X_b, y_b=None, w_b=None, training=True)[source]

Get initial information about task normalization

evaluate(dataset, metrics, transformers=[], per_task_metrics=False)

Evaluates the performance of this model on specified dataset.

Parameters:
  • dataset (dc.data.Dataset) – Dataset object.
  • metrics (deepchem.metrics.Metric) – Evaluation metric
  • transformers (list) – List of deepchem.transformers.Transformer
  • per_task_metrics (bool) – If True, return per-task scores.
Returns:Maps tasks to scores under metric.
Return type:dict

fit(dataset, nb_epoch=10, max_checkpoints_to_keep=5, log_every_N_batches=50, checkpoint_interval=10, **kwargs)[source]
fit_on_batch(X, y, w)

Updates existing model with new information.

get_model_filename(model_dir)

Given model directory, obtain filename for the model itself.

get_num_tasks()[source]

Needed to use Model.predict() from superclass.

get_params(deep=True)

Get parameters for this estimator.

Parameters:deep (boolean, optional) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:params – Parameter names mapped to their values.
Return type:mapping of string to any
get_params_filename(model_dir)

Given model directory, obtain filename for the model parameters.

get_task_type()

Currently models can only be classifiers or regressors.

predict(dataset, transformers=[], **kwargs)[source]

Wraps predict to set batch_size/padding.

predict_on_batch(X)[source]

Return model output for the provided input.

predict_proba(dataset, transformers=[], batch_size=None, n_classes=2)

TODO: Do transformers even make sense here?

Returns:numpy ndarray of shape (n_samples, n_classes*n_tasks)
Return type:y_pred
predict_proba_on_batch(X)

Makes predictions of class probabilities on given batch of new data.

Parameters:X (np.ndarray) – Features
reload()

Reload trained model from disk.

save()[source]

No-op since this model doesn’t currently support saving...

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns:
Return type:self

deepchem.models.tf_new_models.support_classifier module

Train support-based models.

class deepchem.models.tf_new_models.support_classifier.SupportGraphClassifier(model, test_batch_size=10, support_batch_size=10, learning_rate=0.001, similarity='cosine', **kwargs)[source]

Bases: deepchem.models.models.Model

add_placeholders()[source]

Adds placeholders to graph.

add_training_loss()[source]

Adds training loss and scores for network.

construct_feed_dict(test, support, training=True, add_phase=False)[source]

Constructs tensorflow feed from test/support sets.

evaluate(dataset, metric, n_pos, n_neg, n_trials=1000, exclude_support=True)[source]

Evaluates performance on a dataset according to the given metric.

Evaluates the performance of the trained model by sampling supports randomly for each task in dataset. For each sampled support, the accuracy of the model with that support provided is computed on all data for that task. If exclude_support is True (the default), the support set is excluded from this accuracy calculation; set exclude_support to False to evaluate the model's memorization capacity.

Since the accuracy on a task depends on the choice of random support, the evaluation experiment is repeated n_trials times for each task (each task gets n_trials experiments). The computed accuracies are averaged across trials.

TODO(rbharath): Currently does not support any transformers.

Parameters:
  • dataset (dc.data.Dataset) – Dataset to test on.
  • metric (dc.metrics.Metric) – Evaluation metric.
  • n_pos (int, optional) – Number of positive samples per support.
  • n_neg (int, optional) – Number of negative samples per support.
  • n_trials (int, optional) – Number of (support, test) experiments sampled per task.
  • exclude_support (bool, optional) – Whether the support set should be excluded when computing model accuracy.
fit(dataset, n_episodes_per_epoch=1000, nb_epochs=1, n_pos=1, n_neg=9, log_every_n_samples=10, **kwargs)[source]

Fits model on dataset using cached supports.

For each epoch, samples n_episodes_per_epoch (support, test) pairs and performs gradient descent on each.

Parameters:
  • dataset (dc.data.Dataset) – Dataset to fit model on.
  • nb_epochs (int, optional) – number of epochs of training.
  • n_episodes_per_epoch (int, optional) – Number of (support, test) pairs to sample and train on per epoch.
  • n_pos (int, optional) – Number of positive examples per support.
  • n_neg (int, optional) – Number of negative examples per support.
  • log_every_n_samples (int, optional) – Displays logging information after this many samples.
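
A hedged training/evaluation sketch, where support_graph is a SequentialSupportGraph as in the graph_models example above and train_dataset/test_dataset are pre-built single-task dc.data.Dataset objects holding ConvMol features; all of these are assumptions, not part of this page:

    import deepchem as dc
    from deepchem.models.tf_new_models.support_classifier import SupportGraphClassifier

    model = SupportGraphClassifier(support_graph,
                                   test_batch_size=10,
                                   support_batch_size=10,
                                   learning_rate=1e-3)

    # Each episode samples a support set with n_pos positives and n_neg negatives.
    model.fit(train_dataset, nb_epochs=1, n_episodes_per_epoch=100, n_pos=1, n_neg=9)

    metric = dc.metrics.Metric(dc.metrics.roc_auc_score)
    scores = model.evaluate(test_dataset, metric, n_pos=1, n_neg=9, n_trials=50)
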
fit_on_batch(X, y, w)

Updates existing model with new information.

get_model_filename(model_dir)

Given model directory, obtain filename for the model itself.

get_num_tasks()

Get number of tasks.

get_params(deep=True)

Get parameters for this estimator.

Parameters:deep (boolean, optional) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:params – Parameter names mapped to their values.
Return type:mapping of string to any
get_params_filename(model_dir)

Given model directory, obtain filename for the model parameters.

get_scores()[source]

Adds tensor operations for computing scores.

Computes the class prediction yhat (Eqn. (1) in the Matching Networks paper) for test compounds.

get_task_type()

Currently models can only be classifiers or regressors.

get_training_op(loss)[source]

Attaches an optimizer to the graph.

predict(support, test)[source]

Makes predictions on test given support.

TODO(rbharath): Does not currently support any transforms. TODO(rbharath): Only for 1 task at a time currently. Is there a better way?

predict_on_batch(support, test_batch)[source]

Make predictions on batch of data.

predict_proba(support, test)[source]

Makes predictions on test given support.

TODO(rbharath): Does not currently support any transforms. TODO(rbharath): Only for 1 task at a time currently. Is there a better way?

Parameters:
  • support (dc.data.Dataset) – The support dataset
  • test (dc.data.Dataset) – The test dataset

predict_proba_on_batch(support, test_batch)[source]

Make predictions on batch of data.

reload()

Reload trained model from disk.

save()[source]

Save all models

TODO(rbharath): Saving is not yet supported for this model.

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns:
Return type:self

deepchem.models.tf_new_models.vina_model module

Implements Autodock Vina’s pose-generation in tensorflow.

class deepchem.models.tf_new_models.vina_model.VinaModel(max_local_steps=10, max_mutations=10)[source]

Bases: deepchem.models.models.Model

construct_graph(N_protein=1000, N_ligand=100, M=50, ndim=3, k=5, nbr_cutoff=6)[source]

Builds the computational graph for Vina.

evaluate(dataset, metrics, transformers=[], per_task_metrics=False)

Evaluates the performance of this model on specified dataset.

Parameters:
  • dataset (dc.data.Dataset) – Dataset object.
  • metrics (deepchem.metrics.Metric) – Evaluation metric
  • transformers (list) – List of deepchem.transformers.Transformer
  • per_task_metrics (bool) – If True, return per-task scores.
Returns:Maps tasks to scores under metric.
Return type:dict

fit(X_protein, Z_protein, X_ligand, Z_ligand, y)[source]

Fit to actual data.

fit_on_batch(X, y, w)

Updates existing model with new information.

generate_conformation(protein, ligand, max_steps=10)[source]

Performs the global search for conformations.

get_model_filename(model_dir)

Given model directory, obtain filename for the model itself.

get_num_tasks()

Get number of tasks.

get_params(deep=True)

Get parameters for this estimator.

Parameters:deep (boolean, optional) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:params – Parameter names mapped to their values.
Return type:mapping of string to any
get_params_filename(model_dir)

Given model directory, obtain filename for the model parameters.

get_task_type()

Currently models can only be classifiers or regressors.

mutate_conformer(protein, ligand)[source]

Performs a mutation on the ligand position.

predict(dataset, transformers=[], batch_size=None)

Uses self to make predictions on provided Dataset object.

Returns:numpy ndarray of shape (n_samples,)
Return type:y_pred
predict_on_batch(X, **kwargs)

Makes predictions on given batch of new data.

Parameters:X (np.ndarray) – Features
predict_proba(dataset, transformers=[], batch_size=None, n_classes=2)

TODO: Do transformers even make sense here?

Returns:numpy ndarray of shape (n_samples, n_classes*n_tasks)
Return type:y_pred
predict_proba_on_batch(X)

Makes predictions of class probabilities on given batch of new data.

Parameters:X (np.ndarray) – Features
reload()

Reload trained model from disk.

save()

Dispatcher function for saving.

Each subclass is responsible for overriding this method.

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns:
Return type:self
deepchem.models.tf_new_models.vina_model.compute_closest_neighbors(coords, cells, atoms_in_cells, neighbor_cells, N, n_cells, ndim=3, k=5)[source]

Computes nearest neighbors from neighboring cells.

TODO(rbharath): Make this pass test

Parameters:
  • atoms_in_cells (list) – Of length n_cells. Each entry tensor of shape (k, ndim)
  • neighbor_cells (tf.Tensor) – Of shape (n_cells, 26).
  • N (int) – Number of atoms
deepchem.models.tf_new_models.vina_model.compute_neighbor_cells(cells, ndim, n_cells)[source]

Compute neighbors of cells in grid.

TODO(rbharath): Do we need to handle periodic boundary conditions properly here?
TODO(rbharath): This doesn't handle boundaries well. We hard-code looking for 26 neighbors, which isn't right for boundary cells in the cube.

Note n_cells is box_size**ndim. 26 is the number of neighbors of a cube in a grid (including diagonals).

Parameters:cells (tf.Tensor) – (n_cells, 26) shape.
deepchem.models.tf_new_models.vina_model.compute_neighbor_list(coords, nbr_cutoff, N, M, n_cells, ndim=3, k=5)[source]

Computes a neighbor list from atom coordinates.

Parameters:
  • coords (tf.Tensor) – Shape (N, ndim)
  • N (int) – Maximum number of atoms
  • M (int) – Maximum number of neighbors
  • ndim (int) – Dimensionality of space.
  • k (int) – Number of nearest neighbors to pull down.
Returns:nbr_list – Shape (N, M) of atom indices
Return type:tf.Tensor
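
A hedged sketch using the signature documented above; n_cells is taken to be box_size**ndim as noted under compute_neighbor_cells below, and the module's own TODOs suggest parts of this code path may be incomplete:

    import numpy as np
    import tensorflow as tf
    from deepchem.models.tf_new_models.vina_model import compute_neighbor_list

    N, M, ndim, k = 100, 8, 3, 5   # max atoms, max neighbors, dimensions, kNN
    nbr_cutoff = 6.0               # Angstrom
    start, stop = -12.0, 12.0      # box extent per dimension
    n_cells = int(((stop - start) / nbr_cutoff) ** ndim)

    coords = tf.constant(np.random.uniform(start, stop, (N, ndim)).astype(np.float32))
    nbr_list = compute_neighbor_list(coords, nbr_cutoff, N, M, n_cells, ndim=ndim, k=k)
    with tf.Session() as sess:
        print(sess.run(nbr_list).shape)  # expected (N, M) per the docs above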

deepchem.models.tf_new_models.vina_model.cutoff(d, x)[source]

Truncates interactions that are too far away.

deepchem.models.tf_new_models.vina_model.g(c, Nrot)[source]

Nonlinear function mapping interactions to free energy.

deepchem.models.tf_new_models.vina_model.gauss_1(d)[source]

Computes first Gaussian interaction term.

Note that d must be in Angstrom

deepchem.models.tf_new_models.vina_model.gauss_2(d)[source]

Computes second Gaussian interaction term.

Note that d must be in Angstrom.

deepchem.models.tf_new_models.vina_model.get_cells(start, stop, nbr_cutoff, ndim=3)[source]

Returns the locations of all grid points in box.

Suppose start is -10 Angstrom, stop is 10 Angstrom, and nbr_cutoff is 1. Then this would return a list of length 20^3 whose entries would be [(-10, -10, -10), (-10, -10, -9), ..., (9, 9, 9)]

Returns:cells – (box_size**ndim, ndim) shape.
Return type:tf.Tensor
deepchem.models.tf_new_models.vina_model.get_cells_for_atoms(coords, cells, N, n_cells, ndim=3)[source]

Compute the cells each atom belongs to.

Parameters:
  • coords (tf.Tensor) – Shape (N, ndim)
  • cells (tf.Tensor) – (box_size**ndim, ndim) shape.
Returns:cells_for_atoms – Shape (N, 1)
Return type:tf.Tensor

deepchem.models.tf_new_models.vina_model.h(d)[source]

Sum of energy terms used in Autodock Vina.

\[h_{t_i,t_j}(d) = w_1 \textrm{gauss}_1(d) + w_2 \textrm{gauss}_2(d) + w_3 \textrm{repulsion}(d) + w_4 \textrm{hydrophobic}(d) + w_5 \textrm{hbond}(d)\]
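
For orientation, the standard functional forms of these interaction terms in AutoDock Vina (Trott & Olson, 2010) can be written in NumPy as below, with d the surface distance in Angstrom. This is a reference sketch, not this module's TensorFlow implementation, which is assumed to follow the same forms:

    import numpy as np

    def gauss_1(d):
        return np.exp(-(d / 0.5) ** 2)           # first Gaussian, width 0.5 A

    def gauss_2(d):
        return np.exp(-((d - 3.0) / 2.0) ** 2)   # second Gaussian, centered at 3 A

    def repulsion(d):
        return np.where(d < 0.0, d ** 2, 0.0)    # quadratic penalty for steric overlap

    def hydrophobic(d):
        # 1 for d < 0.5 A, 0 for d > 1.5 A, linear in between
        return np.clip(1.5 - d, 0.0, 1.0)

    def hbond(d):
        # 1 for d < -0.7 A, 0 for d >= 0 A, linear in between
        return np.clip(-d / 0.7, 0.0, 1.0)
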
deepchem.models.tf_new_models.vina_model.hbond(d)[source]

Computes hydrogen bond term.

deepchem.models.tf_new_models.vina_model.hydrophobic(d)[source]

Compute hydrophobic interaction term.

deepchem.models.tf_new_models.vina_model.put_atoms_in_cells(coords, cells, N, n_cells, ndim, k=5)[source]

Place each atom into cells. O(N) runtime.

Let N be the number of atoms.

Parameters:
  • coords (tf.Tensor) – (N, 3) shape.
  • cells (tf.Tensor) – (n_cells, ndim) shape.
  • N (int) – Number of atoms
  • ndim (int) – Dimensionality of input space
  • k (int) – Number of nearest neighbors.
Returns:closest_atoms – Of shape (n_cells, k, ndim)
Return type:tf.Tensor

deepchem.models.tf_new_models.vina_model.repulsion(d)[source]

Computes repulsion interaction term.

Module contents