# Using Splitters

In this tutorial we will have a look at the various splitters present in the DeepChem library and how each of them can be used.

import deepchem as dc
import pandas as pd
import os


## Index Splitter

We start with the IndexSplitter. This splitter returns three range objects containing the indices of the split, according to the fractions provided by the user. The three ranges can then be used to iterate over the dataset as the train, valid, and test sets.

Each of the splitters inherits two convenience methods from the base class: train_test_split, which splits the data into training and testing sets, and train_valid_test_split, which splits it into train, validation, and test sets.

Note: All the splitters default to an 80/10/10 split for train, valid, and test respectively. This can be changed by passing frac_train, frac_valid, and frac_test in the ratio we want.
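The default behavior can be sketched in plain Python (index_split here is a hypothetical helper for illustration, not DeepChem's own code): an index-based split simply computes two cutoffs from the fractions and slices the index range.

```python
# Hypothetical sketch of index-based splitting: two cutoffs partition
# the range [0, n_samples) into train, valid, and test index ranges.

def index_split(n_samples, frac_train=0.8, frac_valid=0.1):
    """Return (train, valid, test) index ranges over n_samples rows."""
    train_cutoff = int(frac_train * n_samples)
    valid_cutoff = int((frac_train + frac_valid) * n_samples)
    train = range(0, train_cutoff)
    valid = range(train_cutoff, valid_cutoff)
    test = range(valid_cutoff, n_samples)
    return train, valid, test

train, valid, test = index_split(10)
print(len(train), len(valid), len(test))  # 8 1 1
```

Passing different fractions moves the two cutoffs, which is all the later `frac_train=0.7, frac_valid=0.2, frac_test=0.1` call changes.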

current_dir=os.path.dirname(os.path.realpath('__file__'))
input_data=os.path.join(current_dir,'../../deepchem/models/tests/example.csv')


We then featurize the data using any one of the featurizers present.

tasks=['log-solubility']
featurizer=dc.feat.CircularFingerprint(size=1024)
# Build a featurized dataset from the CSV so it can be split below
# (CSVLoader arguments may differ slightly between DeepChem versions).
loader=dc.data.CSVLoader(tasks=tasks,feature_field='smiles',featurizer=featurizer)
dataset=loader.create_dataset(input_data)

Loading raw samples now.
shard_size: 8192
Featurizing sample 0
TIMING: featurizing shard 0 took 0.029 s
TIMING: dataset construction took 0.042 s

from deepchem.splits.splitters import IndexSplitter

splitter=IndexSplitter()
train_data,valid_data,test_data=splitter.split(dataset)

train_data=[i for i in train_data]
valid_data=[i for i in valid_data]
test_data=[i for i in test_data]

len(train_data),len(valid_data),len(test_data)

(8, 1, 1)


As we can see, without any user-specified fractions the data was split into the default 80/10/10.

But when we specify the parameters, the dataset is split according to our specifications.

train_data,valid_data,test_data=splitter.split(dataset,frac_train=0.7,frac_valid=0.2,frac_test=0.1)

train_data=[i for i in train_data]
valid_data=[i for i in valid_data]
test_data=[i for i in test_data]

len(train_data),len(valid_data),len(test_data)

(7, 2, 1)


## Specified Splitter

The next splitter present in the library is the SpecifiedSplitter. This splitter needs a column in the dataset that specifies whether each row is to be used for training, validation, or testing.

from deepchem.splits.splitters import SpecifiedSplitter
current_dir=os.path.dirname(os.path.realpath('__file__'))
input_file=os.path.join(current_dir,'../../deepchem/models/tests/user_specified_example.csv')

featurizer=dc.feat.CircularFingerprint(size=1024)
# Featurize the CSV into a dataset before splitting
# (CSVLoader arguments may differ slightly between DeepChem versions).
loader=dc.data.CSVLoader(tasks=tasks,feature_field='smiles',featurizer=featurizer)
dataset=loader.create_dataset(input_file)

split_field='split'

splitter=SpecifiedSplitter(input_file,split_field)

Loading raw samples now.
shard_size: 8192
Featurizing sample 0
TIMING: featurizing shard 0 took 0.033 s
TIMING: dataset construction took 0.040 s

train_data,valid_data,test_data=splitter.split(dataset)


When we split the data using the SpecifiedSplitter, it reads each row of the split_field column, in which the user specifies whether the given row should be used as training, validation, or testing data. The values in the split_field must be train, valid, or test. Note: the input is case insensitive.

train_data,valid_data,test_data

([0, 1, 2, 3, 4, 5], [6, 7], [8, 9])
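The selection logic can be sketched in plain Python (specified_split is a hypothetical helper, not DeepChem's implementation): each row's label decides which index list it joins.

```python
# Hypothetical sketch: partition row indices by their split_field label,
# matching 'train', 'valid', or 'test' case-insensitively.

def specified_split(labels):
    train, valid, test = [], [], []
    for i, label in enumerate(labels):
        label = label.strip().lower()
        if label == "train":
            train.append(i)
        elif label == "valid":
            valid.append(i)
        elif label == "test":
            test.append(i)
        else:
            raise ValueError(f"Unknown split label: {label!r}")
    return train, valid, test

labels = ["Train"] * 6 + ["Valid"] * 2 + ["Test"] * 2
print(specified_split(labels))  # ([0, 1, 2, 3, 4, 5], [6, 7], [8, 9])
```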


## Indice Splitter

Another splitter present in the framework is the IndiceSplitter. This splitter takes valid_indices and test_indices, lists containing the indices of the validation and test data in the dataset respectively; every other index is assigned to the training set.

from deepchem.splits.splitters import IndiceSplitter

splitter=IndiceSplitter(valid_indices=[7],test_indices=[9])

splitter.split(dataset)

([0, 1, 2, 3, 4, 5, 6, 8], [7], [9])
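The rule can be sketched in plain Python (indice_split is a hypothetical helper, not DeepChem's code): whatever is not held out for validation or testing goes to training.

```python
# Hypothetical sketch: indices listed in valid_indices or test_indices are
# held out; everything else becomes training data.

def indice_split(n_samples, valid_indices, test_indices):
    held_out = set(valid_indices) | set(test_indices)
    train = [i for i in range(n_samples) if i not in held_out]
    return train, list(valid_indices), list(test_indices)

print(indice_split(10, valid_indices=[7], test_indices=[9]))
# ([0, 1, 2, 3, 4, 5, 6, 8], [7], [9])
```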


## RandomGroupSplitter

The splitter that can be used to split data on the basis of groupings is the RandomGroupSplitter.

An example use case is when there are multiple conformations of the same molecule that share the same topology. This splitter guarantees that the resulting splits preserve the groupings.

Note that it doesn't do any dynamic programming or anything fancy to try to maximize the choice such that frac_train, frac_valid, or frac_test is satisfied. It simply permutes the groups themselves. As such, use with caution if the number of elements per group varies significantly.

The parameter that needs to be provided to the splitter is groups: an array-like of hashables with the same length as the dataset.
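A minimal plain-Python sketch of this idea (random_group_split is hypothetical, not DeepChem's implementation): shuffle the distinct groups, then hand out whole groups until each split's sample budget is filled. Because assignment happens per group, no group is ever divided between splits.

```python
import random

# Hypothetical sketch of a group-preserving split: permute the groups
# themselves, then assign each whole group to train, valid, or test.

def random_group_split(groups, frac_train=0.8, frac_valid=0.1, seed=0):
    indices_by_group = {}
    for i, g in enumerate(groups):
        indices_by_group.setdefault(g, []).append(i)

    group_ids = list(indices_by_group)
    random.Random(seed).shuffle(group_ids)

    n = len(groups)
    train, valid, test = [], [], []
    for g in group_ids:
        members = indices_by_group[g]
        if len(train) + len(members) <= frac_train * n:
            train.extend(members)
        elif len(valid) + len(members) <= frac_valid * n:
            valid.extend(members)
        else:
            test.extend(members)
    return train, valid, test

groups = [0, 4, 1, 2, 3, 7, 0, 3, 1, 0]
train, valid, test = random_group_split(groups)

# Each group lands entirely in one split:
for g in set(groups):
    members = {i for i, x in enumerate(groups) if x == g}
    assert members <= set(train) or members <= set(valid) or members <= set(test)
```

Since groups are assigned whole, the realized fractions can drift from the requested ones when group sizes vary, which is the caution noted above.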

from deepchem.splits.splitters import RandomGroupSplitter

groups = [0, 4, 1, 2, 3, 7, 0, 3, 1, 0]

splitter=RandomGroupSplitter(groups=groups)

train_idxs, valid_idxs, test_idxs = splitter.split(dataset)

Loading raw samples now.
shard_size: 8192
Featurizing sample 0
TIMING: featurizing shard 0 took 0.030 s
TIMING: dataset construction took 0.040 s

train_idxs,valid_idxs,test_idxs

([5, 1, 2, 8, 0, 6, 9], [4, 7], [3])

train_data=[groups[i] for i in train_idxs]
valid_data=[groups[i] for i in valid_idxs]
test_data=[groups[i] for i in test_idxs]

print("Groups present in the training data =",train_data)
print("Groups present in the validation data = ",valid_data)
print("Groups present in the testing data = ", test_data)

Groups present in the training data = [7, 4, 1, 1, 0, 0, 0]
Groups present in the validation data =  [3, 3]
Groups present in the testing data =  [2]


So the RandomGroupSplitter, when the groups are properly assigned, splits the data accordingly and preserves the groupings.

## Scaffold Splitter

The ScaffoldSplitter splits the data based on the scaffolds of small molecules. The splitter generates a scaffold for each molecule from the SMILES strings in the data, then sorts the data into scaffold sets.
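The assignment step can be sketched in plain Python, assuming the scaffolds have already been computed from the SMILES (DeepChem derives Bemis-Murcko scaffolds via RDKit; the scaffold strings and the scaffold_split helper below are illustrative, not DeepChem's code). Molecules sharing a scaffold stay together, and scaffold sets are assigned largest-first so the training set fills up before valid and test.

```python
# Hypothetical sketch: group molecule indices by scaffold, order scaffold
# sets largest-first, then fill train, valid, and test in that order.

def scaffold_split(scaffolds, frac_train=0.8, frac_valid=0.1):
    sets = {}
    for i, s in enumerate(scaffolds):
        sets.setdefault(s, []).append(i)

    # Largest scaffold sets first.
    ordered = sorted(sets.values(), key=len, reverse=True)

    n = len(scaffolds)
    train, valid, test = [], [], []
    for members in ordered:
        if len(train) + len(members) <= frac_train * n:
            train.extend(members)
        elif len(valid) + len(members) <= frac_valid * n:
            valid.extend(members)
        else:
            test.extend(members)
    return train, valid, test

# Illustrative scaffold strings (benzene, cyclohexane, pyridine, ...):
scaffolds = ["c1ccccc1", "c1ccccc1", "C1CCCCC1", "c1ccncc1", "C1CCCCC1",
             "c1ccccc1", "C1CCNCC1", "c1ccccc1", "c1ccncc1", "C1CCOCC1"]
train, valid, test = scaffold_split(scaffolds)
print(len(train), len(valid), len(test))  # 8 1 1
```

A consequence of this design is that the rarest scaffolds tend to end up in the test set, which makes the split a harder, more realistic test of generalization to unseen chemotypes.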

from deepchem.splits.splitters import ScaffoldSplitter

splitter=ScaffoldSplitter()

Loading raw samples now.
shard_size: 8192

train_data,valid_data,test_data = splitter.split(dataset,frac_train=0.7,frac_valid=0.2,frac_test=0.1)

len(train_data),len(valid_data),len(test_data)

(7, 2, 1)