EfficientUnet++
The EfficientUnet++ segmentation model.
Decoder
Code credit: https://github.com/jlcsilva/segmentation_models.pytorch
kelp.nn.models.efficientunetplusplus.decoder.EfficientUnetPlusPlusDecoder
Bases: Module
EfficientUnet++ Decoder.
Source code in kelp/nn/models/efficientunetplusplus/decoder.py
kelp.nn.models.efficientunetplusplus.decoder.InvertedResidual
Bases: Module
Inverted bottleneck residual block with an scSE (concurrent spatial and channel squeeze-and-excitation) block embedded into the residual branch, after the depthwise convolution. By default, uses batch normalization and Hardswish activation.
Source code in kelp/nn/models/efficientunetplusplus/decoder.py
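The exact implementation lives in the source file linked above. As a rough illustration of the pattern described (1x1 expansion, depthwise convolution, scSE attention, 1x1 projection, with batch normalization and Hardswish), here is a minimal PyTorch sketch. The class names, expansion factor, and reduction ratio are illustrative assumptions, not the library's exact API.

```python
import torch
from torch import nn


class SCSEModule(nn.Module):
    """Concurrent spatial and channel squeeze-and-excitation (scSE) attention (sketch)."""

    def __init__(self, channels: int, reduction: int = 16) -> None:
        super().__init__()
        # Channel SE: global pooling -> bottleneck -> per-channel gate
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial SE: 1x1 convolution -> per-pixel gate
        self.sse = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.cse(x) + x * self.sse(x)


class InvertedResidualSketch(nn.Module):
    """Illustrative inverted bottleneck block: expand -> depthwise conv -> scSE -> project."""

    def __init__(self, in_channels: int, out_channels: int, expansion: int = 2) -> None:
        super().__init__()
        mid = in_channels * expansion
        self.block = nn.Sequential(
            # 1x1 expansion
            nn.Conv2d(in_channels, mid, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid),
            nn.Hardswish(inplace=True),
            # 3x3 depthwise convolution
            nn.Conv2d(mid, mid, kernel_size=3, padding=1, groups=mid, bias=False),
            nn.BatchNorm2d(mid),
            nn.Hardswish(inplace=True),
            # scSE attention applied after the depthwise convolution
            SCSEModule(mid),
            # 1x1 projection back to the output width
            nn.Conv2d(mid, out_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels),
        )
        # Residual connection only when shapes match
        self.use_skip = in_channels == out_channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.block(x)
        return x + out if self.use_skip else out
```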
Model
Code credit: https://github.com/jlcsilva/segmentation_models.pytorch
kelp.nn.models.efficientunetplusplus.model.EfficientUnetPlusPlus
Bases: SegmentationModel
The EfficientUNet++ is a fully convolutional neural network for ordinary and medical image semantic segmentation. It consists of an encoder and a decoder, connected by skip connections. The encoder extracts features of different spatial resolutions, which are fed to the decoder through skip connections. The decoder combines its own feature maps with the ones from skip connections to produce accurate segmentation masks. The EfficientUNet++ decoder architecture is based on the UNet++, a model composed of nested U-Net-like decoder sub-networks. To increase performance and computational efficiency, the EfficientUNet++ replaces the UNet++'s blocks with inverted residual blocks with depthwise convolutions and embedded spatial and channel attention mechanisms. It synergizes well with EfficientNet encoders: due to their efficient visual representations (i.e., using few channels to represent extracted features), EfficientNet encoders require little computation from the decoder.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
encoder_name | str | Name of the classification model that will be used as an encoder (a.k.a. backbone) to extract features of different spatial resolutions | 'timm-efficientnet-b0' |
encoder_depth | int | Number of stages used in the encoder, in range [3, 5]. Each stage generates features two times smaller in spatial dimensions than the previous one (e.g. for depth 0 we will have features with shapes [(N, C, H, W)], for depth 1 - [(N, C, H, W), (N, C, H // 2, W // 2)] and so on). Default is 5 | 5 |
encoder_weights | Optional[str] | One of None (random initialization), "imagenet" (pre-training on ImageNet), or other pretrained weights (see table with available weights for each encoder_name) | 'imagenet' |
decoder_channels | Optional[List[int]] | List of integers which specify the in_channels parameter for convolutions used in the decoder. Length of the list should be the same as encoder_depth | None |
in_channels | int | Number of input channels for the model, default is 3 (RGB images) | 3 |
classes | int | Number of classes for the output mask (equivalently, the number of channels of the output mask) | 1 |
activation | Optional[Union[str, Callable[[Any], Any]]] | An activation function to apply after the final convolution layer. Available options are "sigmoid", "softmax", "logsoftmax", "tanh", "identity", a callable, and None. Default is None | None |
aux_params | Optional[Dict[str, Any]] | Dictionary with parameters of the auxiliary output (classification head). The auxiliary output is built on top of the encoder if aux_params is not None. Supported params: classes (int) - number of classes; pooling (str) - one of "max", "avg", default is "avg"; dropout (float) - dropout factor in [0, 1); activation (str) - activation function to apply, "sigmoid"/"softmax" (could be None to return logits) | None |
Reference
Source code in kelp/nn/models/efficientunetplusplus/model.py
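As a quick usage sketch, assuming only the constructor parameters documented in the table above (the chosen encoder, image size, and batch size are illustrative):

```python
import torch

from kelp.nn.models.efficientunetplusplus.model import EfficientUnetPlusPlus

# Build the model with an EfficientNet encoder; values below are examples, not defaults you must use
model = EfficientUnetPlusPlus(
    encoder_name="timm-efficientnet-b0",  # backbone used to extract multi-scale features
    encoder_weights="imagenet",           # pretrained encoder weights
    in_channels=3,                        # RGB input
    classes=1,                            # single-channel output mask
)

# Forward pass on a dummy batch; spatial size should be divisible by 2**encoder_depth
x = torch.randn(2, 3, 256, 256)
masks = model(x)  # logits, since activation=None by default
print(masks.shape)  # expected: torch.Size([2, 1, 256, 256])
```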