rztdl.dl.helpers package

Submodules

rztdl.dl.helpers.activations module

@created on: 17/12/19,
@author: Umesh Kumar,
@version: v0.0.1

Description:

Sphinx Documentation Status: Complete

class rztdl.dl.helpers.activations.Activation(activation, **kwargs)[source]

Bases: tensorflow.python.keras.layers.core.Activation

classmethod blueprint()[source]
classmethod blueprint_properties()[source]
class rztdl.dl.helpers.activations.Elu(alpha: float = 1.0)[source]

Bases: rztdl.dl.helpers.activations.Activation

Apply ELU activation to the given input

The exponential linear activation: x if x > 0 and alpha * (exp(x) - 1) if x < 0.

Parameters:alpha (float) – A scalar, slope of negative section.
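
The formula above is easy to sanity-check in plain NumPy; the following is an illustrative sketch of the math, not the rztdl implementation:

    import numpy as np

    def elu(x, alpha=1.0):
        # x if x > 0, alpha * (exp(x) - 1) otherwise
        return np.where(x > 0, x, alpha * (np.exp(x) - 1))

    elu(np.array([-2.0, 0.0, 3.0]))  # -> [-0.8647, 0., 3.]
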
class rztdl.dl.helpers.activations.Exponential[source]

Bases: rztdl.dl.helpers.activations.Activation

Apply Exponential activation to the given input

The Exponential activation: exp(x)

class rztdl.dl.helpers.activations.HardSigmoid[source]

Bases: rztdl.dl.helpers.activations.Activation

Apply Hard Sigmoid activation to the given input

The hard sigmoid activation: 0 if x < -2.5, 1 if x > 2.5, and 0.2 * x + 0.5 if -2.5 <= x <= 2.5.
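
A minimal NumPy sketch of this piecewise function (np.clip expresses all three branches at once; illustrative only):

    import numpy as np

    def hard_sigmoid(x):
        # 0.2 * x + 0.5, clipped to [0, 1]
        return np.clip(0.2 * x + 0.5, 0.0, 1.0)

    hard_sigmoid(np.array([-3.0, 0.0, 3.0]))  # -> [0., 0.5, 1.]
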
class rztdl.dl.helpers.activations.Linear[source]

Bases: rztdl.dl.helpers.activations.Activation

Apply Linear activation to the given input

The Linear activation: linear(x) = x

class rztdl.dl.helpers.activations.Relu(alpha: float = 0.0, max_value: float = None, threshold: int = 0)[source]

Bases: rztdl.dl.helpers.activations.Activation

Apply ReLU activation to the given input

The ReLU activation: f(x) = max_value for x >= max_value, f(x) = x for threshold <= x < max_value, and f(x) = alpha * (x - threshold) otherwise.

Parameters:
  • alpha (float) – A scalar, slope of negative section
  • max_value (Optional[float]) – Saturation threshold.
  • threshold (int) – Threshold value for thresholded activation.
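
A NumPy sketch of this generalized ReLU, illustrating how the three parameters interact (not the rztdl implementation):

    import numpy as np

    def relu(x, alpha=0.0, max_value=None, threshold=0.0):
        # identity above the threshold, alpha * (x - threshold) below it,
        # optionally saturated at max_value
        y = np.where(x >= threshold, x, alpha * (x - threshold))
        if max_value is not None:
            y = np.minimum(y, max_value)
        return y

    relu(np.array([-1.0, 0.5, 10.0]), alpha=0.1, max_value=6.0)  # -> [-0.1, 0.5, 6.]
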
class rztdl.dl.helpers.activations.Selu[source]

Bases: rztdl.dl.helpers.activations.Activation

Apply SELU activation to the given input

The SELU activation: scale * x if x > 0 and scale * alpha * (exp(x) - 1) if x < 0,
where scale = 1.05070098 and alpha = 1.67326324.

class rztdl.dl.helpers.activations.Sigmoid[source]

Bases: rztdl.dl.helpers.activations.Activation

Apply Sigmoid activation to the given input

The sigmoid activation: (1.0 / (1.0 + exp(-x))).

class rztdl.dl.helpers.activations.Softmax(axis: int = -1)[source]

Bases: rztdl.dl.helpers.activations.Activation

Apply Softmax activation to the given input

The softmax activation: exp(x)/tf.reduce_sum(exp(x)).

Parameters:axis (int) – Integer, axis along which the softmax normalization is applied.
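
A NumPy sketch of softmax along an axis; subtracting the max is a standard numerical-stability trick and does not change the result:

    import numpy as np

    def softmax(x, axis=-1):
        # exp(x) normalized by its sum along `axis`
        e = np.exp(x - np.max(x, axis=axis, keepdims=True))
        return e / np.sum(e, axis=axis, keepdims=True)

    softmax(np.array([[1.0, 2.0, 3.0]])).round(3)  # -> [[0.09, 0.245, 0.665]]
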
class rztdl.dl.helpers.activations.Softplus[source]

Bases: rztdl.dl.helpers.activations.Activation

Apply Softplus activation to the given input

The softplus activation: log(exp(x) + 1).

class rztdl.dl.helpers.activations.Softsign[source]

Bases: rztdl.dl.helpers.activations.Activation

Apply Softsign activation to the given input

The softsign activation: x / (abs(x) + 1).

class rztdl.dl.helpers.activations.Tanh[source]

Bases: rztdl.dl.helpers.activations.Activation

Apply Tanh activation to the given input

The tanh activation: tanh(x) = sinh(x)/cosh(x) = ((exp(x) - exp(-x))/(exp(x) + exp(-x))).
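
Since all of these helpers subclass tensorflow.python.keras.layers.core.Activation, they should be callable like ordinary Keras layers. A usage sketch, assuming standard Keras call semantics (the wiring into an rztdl flow is not covered by this reference):

    import tensorflow as tf
    from rztdl.dl.helpers.activations import Relu

    activation = Relu(alpha=0.1, max_value=6.0)
    outputs = activation(tf.constant([-2.0, 1.0, 10.0]))  # expected: [-0.2, 1.0, 6.0]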

rztdl.dl.helpers.constraints module

@created on: 24/01/20,
@author: Umesh Kumar,
@version: v3.0.0

Description:

Sphinx Documentation Status: Complete

class rztdl.dl.helpers.constraints.Constraint[source]

Bases: tensorflow.python.keras.constraints.Constraint

Base class for constraints

classmethod blueprint()[source]
classmethod blueprint_properties()[source]
check_dimension(dimensions, **kwargs)[source]
validate(dimensions, **kwargs)[source]
class rztdl.dl.helpers.constraints.MaxNorm(max_value: float = 2, axis: int = 0)[source]

Bases: tensorflow.python.keras.constraints.MaxNorm, rztdl.dl.helpers.constraints.Constraint

Constrains the weights incident to each hidden unit to have a norm less than or equal to a desired value

Parameters:
  • max_value (float) – Max value
  • axis (int) – axis along which to calculate weight norms
validate(dimensions, **kwargs)[source]
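
An illustrative NumPy sketch of the projection this constraint applies (mirroring the Keras MaxNorm it wraps): columns whose norm exceeds max_value are rescaled, the rest pass through essentially unchanged:

    import numpy as np

    def max_norm(w, max_value=2.0, axis=0, eps=1e-7):
        # rescale weight vectors whose L2 norm exceeds max_value
        norms = np.sqrt(np.sum(np.square(w), axis=axis, keepdims=True))
        desired = np.clip(norms, 0.0, max_value)
        return w * desired / (eps + norms)

    w = np.array([[3.0, 0.1], [4.0, 0.2]])  # column norms: 5.0 and ~0.22
    max_norm(w, max_value=2.0)              # first column rescaled to norm 2
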
class rztdl.dl.helpers.constraints.MinMaxNorm(min_value: float = 0.0, max_value: float = 1.0, rate: float = 1.0, axis: int = 0)[source]

Bases: tensorflow.python.keras.constraints.MinMaxNorm, rztdl.dl.helpers.constraints.Constraint

Constrains the weights incident to each hidden unit to have the norm between a lower bound and an upper bound

Parameters:
  • min_value (float) – The minimum norm for the incoming weights.
  • max_value (float) – The maximum norm for the incoming weights.
  • rate (float) – Rate for enforcing the constraint: new weight = (1 - rate) * norm + rate * norm.clip(min_value, max_value)
  • axis (int) – Axis along which to calculate weight norms
validate(dimensions, **kwargs)[source]
class rztdl.dl.helpers.constraints.NonNeg[source]

Bases: tensorflow.python.keras.constraints.NonNeg, rztdl.dl.helpers.constraints.Constraint

Constrains the weights to be non-negative

validate(dimensions, **kwargs)[source]
class rztdl.dl.helpers.constraints.UnitNorm(axis: int = 0)[source]

Bases: tensorflow.python.keras.constraints.UnitNorm, rztdl.dl.helpers.constraints.Constraint

Constrains the weights incident to each hidden unit to have unit norm.

Parameters:axis (int) – Axis along which to calculate weight norms
validate(dimensions, **kwargs)[source]

rztdl.dl.helpers.initializers module

@created on: 16/12/19,
@author: Umesh Kumar,
@version: v0.0.1

Description:

Sphinx Documentation Status: Complete

class rztdl.dl.helpers.initializers.Initializer[source]

Bases: tensorflow.python.ops.init_ops_v2.Initializer

classmethod blueprint()[source]
classmethod blueprint_properties()[source]
class rztdl.dl.helpers.initializers.Ones[source]

Bases: tensorflow.python.ops.init_ops_v2.Ones, rztdl.dl.helpers.initializers.Initializer

Initializer that generates tensors initialized to 1

class rztdl.dl.helpers.initializers.Zeros[source]

Bases: tensorflow.python.ops.init_ops_v2.Zeros, rztdl.dl.helpers.initializers.Initializer

Initializer that generates tensors initialized to 0.

class rztdl.dl.helpers.initializers.RandomUniform(min_val: float = -0.05, max_val: float = 0.05, seed: int = None)[source]

Bases: tensorflow.python.ops.init_ops_v2.RandomUniform, rztdl.dl.helpers.initializers.Initializer

Initializer that generates tensors with a uniform distribution

Parameters:
  • min_val (float) – Minimum value
  • max_val (float) – Maximum value
  • seed (Optional[int]) – Integer used to create random seeds
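
A usage sketch, assuming these helpers follow the TF2 Initializer protocol they inherit (callable with a shape):

    from rztdl.dl.helpers.initializers import RandomUniform

    init = RandomUniform(min_val=-0.05, max_val=0.05, seed=42)
    weights = init(shape=(3, 4))  # tensor of shape (3, 4) with values in [-0.05, 0.05)
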
class rztdl.dl.helpers.initializers.RandomNormal(mean: float = 0.0, stddev: float = 0.05, seed: int = None)[source]

Bases: tensorflow.python.ops.init_ops_v2.RandomNormal, rztdl.dl.helpers.initializers.Initializer

Initializer that generates tensors with a normal distribution

Parameters:
  • mean (float) – Mean
  • stddev (float) – Std dev
  • seed (Optional[int]) – Seed Value
class rztdl.dl.helpers.initializers.Constant(value: float = 0.0)[source]

Bases: tensorflow.python.ops.init_ops_v2.Constant, rztdl.dl.helpers.initializers.Initializer

Initializer that generates tensors with a constant

Parameters:value (float) – Constant Value
class rztdl.dl.helpers.initializers.LecunNormal(seed: int = None)[source]

Bases: tensorflow.python.ops.init_ops_v2.VarianceScaling, rztdl.dl.helpers.initializers.Initializer

LeCun normal initializer

Parameters:seed (Optional[int]) – Seed Value
class rztdl.dl.helpers.initializers.LecunUniform(seed: int = None)[source]

Bases: tensorflow.python.ops.init_ops_v2.VarianceScaling, rztdl.dl.helpers.initializers.Initializer

LeCun uniform initializer

Parameters:seed (Optional[int]) – Seed Value
class rztdl.dl.helpers.initializers.TransferLearning(transfer_function: Callable = None, path: str = None)[source]

Bases: rztdl.dl.helpers.initializers.Initializer

For transfer learning

Either transfer_function or path (to a numpy file) should be passed, not both.

Parameters:
  • transfer_function (Optional[Callable]) – Function which returns a numpy array. This function will be evaluated only while running flows.
  • path (Optional[str]) – Path of the numpy file to be read
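
A usage sketch of the either/or contract; weights.npy and pretrained_weights are hypothetical stand-ins:

    import numpy as np
    from rztdl.dl.helpers.initializers import TransferLearning

    def pretrained_weights():
        # hypothetical loader; evaluated only while running flows
        return np.load("weights.npy")

    init_from_fn = TransferLearning(transfer_function=pretrained_weights)
    init_from_file = TransferLearning(path="weights.npy")
    # passing both transfer_function and path is invalid
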
class rztdl.dl.helpers.initializers.TruncatedNormal(mean: float = 0.0, stddev: float = 0.05, seed: int = None)[source]

Bases: tensorflow.python.ops.init_ops_v2.TruncatedNormal, rztdl.dl.helpers.initializers.Initializer

Initializer that generates a truncated normal distribution

Parameters:
  • mean (float) – Mean
  • stddev (float) – Standard deviation
  • seed (Optional[int]) – Seed Value
class rztdl.dl.helpers.initializers.Orthogonal(gain: float = 1.0, seed: int = None)[source]

Bases: tensorflow.python.ops.init_ops_v2.Orthogonal, rztdl.dl.helpers.initializers.Initializer

Initializer that generates an orthogonal matrix

Parameters:
  • gain (float) – multiplicative factor to apply to the orthogonal matrix
  • seed (Optional[int]) – Seed value
class rztdl.dl.helpers.initializers.GlorotUniform(seed: int = None)[source]

Bases: tensorflow.python.ops.init_ops_v2.GlorotUniform, rztdl.dl.helpers.initializers.Initializer

The Glorot uniform initializer, also called the Xavier uniform initializer

Parameters:seed (Optional[int]) – Seed Value
class rztdl.dl.helpers.initializers.GlorotNormal(seed: int = None)[source]

Bases: tensorflow.python.ops.init_ops_v2.GlorotNormal, rztdl.dl.helpers.initializers.Initializer

The Glorot normal initializer, also called the Xavier normal initializer

Parameters:seed (Optional[int]) – Seed Value

rztdl.dl.helpers.normalizers module

class rztdl.dl.helpers.normalizers.L2Norm(axis: typing.List[int] = (1, ), epsilon: float = 1e-12, name: str = 'l2-norm')[source]

Bases: rztdl.dl.helpers.normalizers.Normalizer

Normalizes along dimension axis using an L2 norm

Parameters:
  • axis (List[int]) – Dimension along which to normalize. A vector of integers >0
  • epsilon (float) – Epsilon value for L2 norm to avoid division by zero
  • name (str) – Optional name
call(inputs, **kwargs)[source]
validate(inputs)[source]
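
An illustrative NumPy sketch of the computation (matching the semantics of tf.math.l2_normalize, which this helper presumably uses):

    import numpy as np

    def l2_norm(x, axis=(1,), epsilon=1e-12):
        # divide by the L2 norm along `axis`, guarding against division by zero
        sq_sum = np.sum(np.square(x), axis=axis, keepdims=True)
        return x / np.sqrt(np.maximum(sq_sum, epsilon))

    l2_norm(np.array([[3.0, 4.0]]))  # -> [[0.6, 0.8]]
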
class rztdl.dl.helpers.normalizers.LocalResponseNorm(depth_radius: int = 5, bias: float = 1, alpha: float = 1, beta: float = 0.5, name: str = 'lrn-norm')[source]

Bases: rztdl.dl.helpers.normalizers.Normalizer

Normalizes the input using Local Response Normalization

Parameters:
  • depth_radius (int) – Half-width of the 1-D normalization window
  • bias (float) – An offset (usually positive to avoid dividing by 0).
  • alpha (float) – A scale factor, usually positive
  • beta (float) – An exponent.
  • name (str) – Optional name
call(inputs, **kwargs)[source]
validate(inputs)[source]
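
An illustrative NumPy sketch of local response normalization over the depth (last) dimension, mirroring tf.nn.local_response_normalization, which this helper presumably wraps:

    import numpy as np

    def local_response_norm(x, depth_radius=5, bias=1.0, alpha=1.0, beta=0.5):
        # each value is divided by (bias + alpha * sum of squares over a
        # window of up to 2 * depth_radius + 1 neighboring depths) ** beta
        depth = x.shape[-1]
        out = np.empty_like(x)
        for d in range(depth):
            lo, hi = max(0, d - depth_radius), min(depth, d + depth_radius + 1)
            sq_sum = np.sum(np.square(x[..., lo:hi]), axis=-1)
            out[..., d] = x[..., d] / (bias + alpha * sq_sum) ** beta
        return out
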
class rztdl.dl.helpers.normalizers.Normalizer(**kwargs)[source]

Bases: tensorflow.python.keras.engine.base_layer.Layer

classmethod blueprint()[source]
classmethod blueprint_properties()[source]

rztdl.dl.helpers.regularizers module

class rztdl.dl.helpers.regularizers.L1(l1: float = 0.01)[source]

Bases: tensorflow.python.keras.regularizers.L1L2, rztdl.dl.helpers.regularizers.Regularizer

L1 Regularization

Parameters:l1 (float) – L1 regularization factor

class rztdl.dl.helpers.regularizers.L1L2(l1: float = 0.0, l2: float = 0.0)[source]

Bases: tensorflow.python.keras.regularizers.L1L2, rztdl.dl.helpers.regularizers.Regularizer

L1L2 Regularization

Parameters:
  • l1 (float) – L1 regularization factor
  • l2 (float) – L2 regularization factor
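
The penalty these regularizers add to the loss is simple to state in NumPy (illustrative; the Keras L1L2 base computes the same quantity):

    import numpy as np

    def l1_l2_penalty(w, l1=0.0, l2=0.0):
        # l1 * sum(|w|) + l2 * sum(w^2), added to the training loss
        return l1 * np.sum(np.abs(w)) + l2 * np.sum(np.square(w))

    l1_l2_penalty(np.array([1.0, -2.0]), l1=0.01, l2=0.01)  # -> 0.08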

class rztdl.dl.helpers.regularizers.L2(l2: float = 0.01)[source]

Bases: tensorflow.python.keras.regularizers.L1L2, rztdl.dl.helpers.regularizers.Regularizer

L2 Regularization

Parameters:l2 (float) – L2 regularization factor

class rztdl.dl.helpers.regularizers.Regularizer[source]

Bases: tensorflow.python.keras.regularizers.Regularizer

Regularizer base class.

classmethod blueprint()[source]
classmethod blueprint_properties()[source]

Module contents

@created on: 16/12/19,
@author: Umesh Kumar,
@version: v0.0.1

Description:

Sphinx Documentation Status: Complete