resunet
The ResUNet model.
Decoder
Code credit: https://github.com/jlcsilva/segmentation_models.pytorch
Model
Code credit: https://github.com/jlcsilva/segmentation_models.pytorch
kelp.nn.models.resunet.model.ResUnet
Bases: SegmentationModel
ResUnet is a fully convolutional neural network for image semantic segmentation. It consists of encoder and decoder parts connected by skip connections. The encoder extracts features at different spatial resolutions (the skip connections), which the decoder uses to produce an accurate segmentation mask. Decoder blocks are fused with the skip connections via concatenation, and each decoder block uses residual connections internally.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
encoder_name | str | Name of the classification model used as the encoder (a.k.a. backbone) to extract features at different spatial resolutions | 'resnet34' |
encoder_depth | int | Number of stages used in the encoder, in range [3, 5]. Each stage generates features two times smaller in spatial dimensions than the previous one (e.g. for depth 0 the feature shapes are [(N, C, H, W)], for depth 1 [(N, C, H, W), (N, C, H // 2, W // 2)], and so on). Default is 5 | 5 |
encoder_weights | Optional[str] | One of None (random initialization), "imagenet" (pre-training on ImageNet), or other pretrained weights (see the table of available weights for each encoder_name) | 'imagenet' |
decoder_channels | Optional[List[int]] | List of integers specifying the in_channels parameter for the convolutions used in the decoder. The length of the list should equal encoder_depth | None |
decoder_use_batchnorm | bool | If True, a BatchNorm2d layer is used between the Conv2d and activation layers. If "inplace", InplaceABN is used instead, which reduces memory consumption. Available options are True, False, "inplace" | True |
decoder_attention_type | Optional[str] | Attention module used in the decoder of the model. Available options are None and "scse" (https://arxiv.org/abs/1808.08127) | None |
in_channels | int | Number of input channels for the model; default is 3 (RGB images) | 3 |
classes | int | Number of classes in the output mask (equivalently, the number of channels of the output mask) | 1 |
activation | Optional[Union[str, Callable[[Any], Any]]] | Activation function applied after the final convolution layer. Available options are "sigmoid", "softmax", "logsoftmax", "tanh", "identity", a callable, and None. Default is None | None |
aux_params | Optional[Dict[str, Any]] | Dictionary with parameters of the auxiliary output (classification head). The auxiliary output is built on top of the encoder when aux_params is not None (default is None). Supported params: classes (int): number of classes; pooling (str): one of "max", "avg", default is "avg"; dropout (float): dropout factor in [0, 1); activation (str): activation to apply, "sigmoid"/"softmax" (may be None to return logits) | None |
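The halving rule in the encoder_depth description above can be sketched numerically. This is a toy helper, not part of the library; the channel count C is kept symbolic here as in the table, although in a real encoder the number of channels changes per stage:

```python
def encoder_feature_shapes(n, c, h, w, encoder_depth):
    """Return the list of feature-map shapes produced for a given encoder_depth.

    Depth d yields d + 1 entries: [(N, C, H, W), (N, C, H // 2, W // 2), ...],
    each stage two times smaller in spatial dimensions than the previous one.
    Note: C is symbolic; actual channel counts depend on the chosen encoder.
    """
    return [(n, c, h // (2 ** d), w // (2 ** d)) for d in range(encoder_depth + 1)]


# For depth 2 and a 1-sample, 3-channel, 256x256 input:
shapes = encoder_feature_shapes(1, 3, 256, 256, 2)
# shapes == [(1, 3, 256, 256), (1, 3, 128, 128), (1, 3, 64, 64)]
```

This also shows why the length of decoder_channels should equal encoder_depth: the decoder needs one block per downsampling stage to fuse each skip connection back in.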
Reference
Source code in kelp/nn/models/resunet/model.py