
Classifiers. All classifiers used the same architecture (Fig. 2) and hyperparameters. The input size to the classifiers was 21x60 (channels x time). Each classifier started with five convolutional layers; each layer was followed by a scaled exponential linear unit (SELU) activation (Klambauer et al., 2017) and a max pooling layer that downsampled by a factor of two. Filter kernel sizes were three, and the numbers of filters per layer were 32, 64, 128, 256, and 512. Channels were analyzed separately. Thereafter, four fully connected layers followed, the first three having 512 nodes. SELU activations were used for all layers except the last, which had one node and a sigmoid activation to complete the classifier. The total number of trainable parameters was 6,640,769.
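A minimal PyTorch sketch of this architecture is given below. It assumes the 21 channels are analyzed separately by a shared 1D convolutional stack (one interpretation of "channels were analyzed separately") and that convolutions use same-padding; these details, along with pooling rounding, are not fully specified in the text, so the resulting trainable-parameter count may come close to, but not exactly match, the reported 6,640,769.

```python
import torch
import torch.nn as nn

class Classifier(nn.Module):
    def __init__(self, n_channels: int = 21, n_timesteps: int = 60):
        super().__init__()
        filters = [32, 64, 128, 256, 512]
        conv_layers = []
        in_ch = 1  # assumption: shared conv weights applied to each channel separately
        for out_ch in filters:
            conv_layers += [
                nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1),  # kernel size three
                nn.SELU(),                                           # SELU after each conv layer
                nn.MaxPool1d(kernel_size=2),                         # downsample time by two
            ]
            in_ch = out_ch
        self.conv = nn.Sequential(*conv_layers)
        # After five poolings the 60 time steps shrink to 1 (60 -> 30 -> 15 -> 7 -> 3 -> 1).
        flat = n_channels * filters[-1] * (n_timesteps // 2 ** len(filters))
        self.fc = nn.Sequential(
            nn.Linear(flat, 512), nn.SELU(),  # first three dense layers: 512 nodes, SELU
            nn.Linear(512, 512), nn.SELU(),
            nn.Linear(512, 512), nn.SELU(),
            nn.Linear(512, 1), nn.Sigmoid(),  # final layer: one node, sigmoid
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 21 channels, 60 time steps)
        b, c, t = x.shape
        x = x.reshape(b * c, 1, t)  # process each channel separately
        x = self.conv(x)            # (b * c, 512, 1)
        x = x.reshape(b, -1)        # concatenate per-channel features
        return self.fc(x)

# Usage: count trainable parameters under the assumptions above.
model = Classifier()
print(sum(p.numel() for p in model.parameters() if p.requires_grad))
```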
