xenonpy.model package

Submodules

xenonpy.model.cgcnn module

class xenonpy.model.cgcnn.ConvLayer(atom_fea_len, nbr_fea_len)[source]

Bases: Module

Convolutional operation on graphs

Initialize ConvLayer.

Parameters:
  • atom_fea_len (int) – Number of atom hidden features.

  • nbr_fea_len (int) – Number of bond features.

forward(atom_in_fea, nbr_fea, nbr_fea_idx)[source]

Forward pass

N: Total number of atoms in the batch
M: Max number of neighbors

Parameters:
  • atom_in_fea (Variable(torch.Tensor) shape (N, atom_fea_len)) – Atom hidden features before convolution

  • nbr_fea (Variable(torch.Tensor) shape (N, M, nbr_fea_len)) – Bond features of each atom’s M neighbors

  • nbr_fea_idx (torch.LongTensor shape (N, M)) – Indices of M neighbors of each atom

Returns:

  • atom_out_fea (nn.Variable shape (N, atom_fea_len)) – Atom hidden features after convolution
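
A minimal usage sketch (not part of the documented API; the sizes N = 6, M = 4 and the feature lengths are assumed toy values) that runs one forward pass over random tensors:

    import torch
    from xenonpy.model.cgcnn import ConvLayer

    N, M = 6, 4                                  # atoms in batch, max neighbors
    conv = ConvLayer(atom_fea_len=8, nbr_fea_len=5)

    atom_in_fea = torch.randn(N, 8)              # atom hidden features
    nbr_fea = torch.randn(N, M, 5)               # bond features of each atom's neighbors
    nbr_fea_idx = torch.randint(0, N, (N, M))    # neighbor indices into the batch

    atom_out_fea = conv(atom_in_fea, nbr_fea, nbr_fea_idx)
    print(atom_out_fea.shape)                    # torch.Size([6, 8])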

training: bool

class xenonpy.model.cgcnn.CrystalGraphConvNet(orig_atom_fea_len, nbr_fea_len, atom_fea_len=64, n_conv=3, h_fea_len=128, n_h=1, classification=False)[source]

Bases: Module

Create a crystal graph convolutional neural network for predicting total material properties.

See Also: [CGCNN].

Initialize CrystalGraphConvNet.

Parameters:
  • orig_atom_fea_len (int) – Number of atom features in the input.

  • nbr_fea_len (int) – Number of bond features.

  • atom_fea_len (int) – Number of hidden atom features in the convolutional layers.

  • n_conv (int) – Number of convolutional layers.

  • h_fea_len (int) – Number of hidden features after pooling.

  • n_h (int) – Number of hidden layers after pooling.

forward(atom_fea, nbr_fea, nbr_fea_idx, crystal_atom_idx)[source]

Forward pass

N: Total number of atoms in the batch
M: Max number of neighbors
N0: Total number of crystals in the batch

Parameters:
  • atom_fea (Variable(torch.Tensor) shape (N, orig_atom_fea_len)) – Atom features from atom type

  • nbr_fea (Variable(torch.Tensor) shape (N, M, nbr_fea_len)) – Bond features of each atom’s M neighbors

  • nbr_fea_idx (torch.LongTensor shape (N, M)) – Indices of M neighbors of each atom

  • crystal_atom_idx (list of torch.LongTensor of length N0) – Mapping from the crystal idx to atom idx

Returns:

  • prediction (nn.Variable shape (N0, )) – Predicted target property for each crystal in the batch
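
An end-to-end sketch (the feature sizes and the two-crystal batch below are assumed toy values, not prescribed by the API) that builds the network and predicts one value per crystal:

    import torch
    from xenonpy.model.cgcnn import CrystalGraphConvNet

    orig_atom_fea_len, nbr_fea_len = 92, 41      # assumed input feature sizes
    N, M = 7, 4                                  # 7 atoms total, 4 neighbors each
    model = CrystalGraphConvNet(orig_atom_fea_len, nbr_fea_len)

    atom_fea = torch.randn(N, orig_atom_fea_len)
    nbr_fea = torch.randn(N, M, nbr_fea_len)
    nbr_fea_idx = torch.randint(0, N, (N, M))
    # Two crystals: atoms 0-2 form the first, atoms 3-6 the second.
    crystal_atom_idx = [torch.LongTensor([0, 1, 2]),
                        torch.LongTensor([3, 4, 5, 6])]

    prediction = model(atom_fea, nbr_fea, nbr_fea_idx, crystal_atom_idx)
    print(prediction.shape)   # one regression value per crystal, e.g. torch.Size([2, 1])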

static pooling(atom_fea, crystal_atom_idx)[source]

Pooling the atom features to crystal features

N: Total number of atoms in the batch
N0: Total number of crystals in the batch

Parameters:
  • atom_fea (Variable(torch.Tensor) shape (N, atom_fea_len)) – Atom feature vectors of the batch

  • crystal_atom_idx (list of torch.LongTensor of length N0) – Mapping from the crystal idx to atom idx
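
In the upstream CGCNN reference code this pooling is a per-crystal mean over the atom feature vectors; the sketch below reproduces that reduction (the mean is an assumption carried over from the reference implementation):

    import torch

    def mean_pooling(atom_fea, crystal_atom_idx):
        # One feature vector per crystal: average the rows of atom_fea
        # selected by each crystal's index map -> shape (N0, atom_fea_len).
        pooled = [torch.mean(atom_fea[idx], dim=0, keepdim=True)
                  for idx in crystal_atom_idx]
        return torch.cat(pooled, dim=0)

    atom_fea = torch.randn(7, 64)
    crystal_atom_idx = [torch.LongTensor([0, 1, 2]),
                        torch.LongTensor([3, 4, 5, 6])]
    print(mean_pooling(atom_fea, crystal_atom_idx).shape)   # torch.Size([2, 64])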

training: bool

xenonpy.model.extern module

This module wraps external model tools for convenient use.

xenonpy.model.sequential module

class xenonpy.model.sequential.LinearLayer(in_features, out_features, *, bias=True, dropout=0.0, activation_func=ReLU(), normalizer=0.1)[source]

Bases: Module

Base NN layer. This is a wrapper around PyTorch's nn layers. See http://pytorch.org/docs/master/nn.html# for details.

Parameters:
  • in_features (int) – Size of each input sample.

  • out_features (int) – Size of each output sample.

  • bias (bool) – If set to False, the layer will not learn an additive bias. Default: True

  • dropout (float) – Probability of an element to be zeroed. Default: 0.0

  • activation_func (func) – Activation function. Default: ReLU()

  • normalizer (float) – Momentum of the normalization layer. Default: 0.1
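
A minimal sketch (the layer sizes and dropout rate are assumed toy values) that builds one linear block and pushes a random batch through it:

    import torch
    from torch import nn
    from xenonpy.model.sequential import LinearLayer

    # One block combining a linear layer with dropout, activation and normalization.
    layer = LinearLayer(290, 64, dropout=0.1, activation_func=nn.ReLU())
    x = torch.randn(10, 290)
    print(layer(x).shape)   # torch.Size([10, 64])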

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

class xenonpy.model.sequential.SequentialLinear(in_features, out_features, bias=True, *, h_neurons=(), h_bias=True, h_dropouts=0.0, h_normalizers=0.1, h_activation_funcs=ReLU())[source]

Bases: Module

Sequential model composed of linear layers, with other configurable hyperparameters, e.g. dropout and hidden layers.

Parameters:
  • in_features (int) – Size of each input sample.

  • out_features (int) – Size of each output sample.

  • bias (bool) – If set to False, the output layer will not learn an additive bias. Default: True

  • h_neurons (tuple) – Number of neurons in each hidden layer. Default: ()

  • h_bias (bool) – Bias setting for the hidden layers. Default: True

  • h_dropouts (float) – Dropout probability for the hidden layers. Default: 0.0

  • h_normalizers (float) – Normalizer momentum for the hidden layers. Default: 0.1

  • h_activation_funcs (func) – Activation function for the hidden layers. Default: ReLU()
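
A minimal sketch (the layer widths are assumed toy values, and h_neurons is assumed to take absolute hidden-layer sizes as integers) that builds a 290 -> 100 -> 40 -> 1 stack:

    import torch
    from xenonpy.model.sequential import SequentialLinear

    model = SequentialLinear(290, 1, h_neurons=(100, 40))
    x = torch.randn(16, 290)
    print(model(x).shape)   # torch.Size([16, 1])
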
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Return type: Any

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

Module contents