Usage Example: PyTorch

PyTorch example code [link to example]

Note

This code was tested with PyTorch v0.4.1.

    test_loader = torch.utils.data.DataLoader(
        datasets.MNIST('../data', train=False, transform=transforms.Compose([
                           transforms.ToTensor(),
                           transforms.Normalize((0.1307,), (0.3081,))
                       ])),
        batch_size=args.test_batch_size, shuffle=True, **kwargs)

    # define dataset loaders
    train_log_loader = torch.utils.data.DataLoader(
        datasets.MNIST('../data', train=True, download=True,
                       transform=transforms.Compose([
                           transforms.ToTensor(),
                           transforms.Normalize((0.1307,), (0.3081,))
                       ])),
        batch_size=args.test_batch_size, shuffle=False, **kwargs)

    test_log_loader = torch.utils.data.DataLoader(
        datasets.MNIST('../data', train=False, transform=transforms.Compose([
                           transforms.ToTensor(),
                           transforms.Normalize((0.1307,), (0.3081,))
                       ])),
        batch_size=args.test_batch_size, shuffle=False, **kwargs)

    model = Net().to(device)
    optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)

    # specify folds
    train_fold = Fold(
        data=train_log_loader,
        foldId="mnist_train",
        dataset_config="mnist.yml"
    )
    test_fold = Fold(
        data=test_log_loader,
        foldId="mnist_test",
        dataset_config="mnist.yml"
    )

    # create new run
    run = Run(
        runId="example_logs_torch",
        folds=[train_fold, test_fold],
        trainfoldId="mnist_train",
    )

    for epoch in range(1, args.epochs + 1):
        # log run every epoch
        log_epoch(run, model, device, epoch, numclass=10)

        train(args, model, device, train_loader, optimizer, epoch)
        test(args, model, device, test_loader)

    # export logs
    run.export(logdir="logs")


if __name__ == '__main__':
    main()

In the PyTorch version we wrap the DataLoader objects for both the train and test folds in a Fold object and provide a unique identifier that references the additional metadata specified in the dataset_config (see Dataset Configuration).
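The normalization constants (0.1307,) and (0.3081,) used in the transforms above are the mean and standard deviation of the MNIST training images. As a plain-Python sketch (the function name normalize is illustrative, not part of any library), transforms.Normalize applies the following per-pixel, channel-wise:

```python
# Sketch of what transforms.Normalize((0.1307,), (0.3081,)) computes:
# output = (input - mean) / std, applied channel-wise.
mean, std = 0.1307, 0.3081

def normalize(x, mean=mean, std=std):
    # Shift the value by the dataset mean, then rescale by the std.
    return (x - mean) / std

# A pixel at the dataset mean maps to 0; pure white (1.0) maps to about 2.82.
print(normalize(0.1307))  # 0.0
print(normalize(1.0))
```

This centers the inputs around zero with roughly unit variance, which generally helps SGD converge.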

We then create a Run object with a unique identifier for the experiment, the list of folds that should be tracked, and the fold identifier trainfoldId of the fold used during training.

After that we can pass the Run object, along with the model, device, epoch, and number of classes, to the utility function log_epoch, which automatically logs the performance of the current model on the specified folds. After training has completed, we can export the results to the directory logdir.
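Conceptually, logging an epoch amounts to evaluating the current model on each tracked fold and recording per-epoch metrics. The sketch below (log_epoch_sketch is a hypothetical stand-in, not the library's actual log_epoch implementation) shows one way such an evaluation pass could look:

```python
import torch

def log_epoch_sketch(model, device, loader, epoch, numclass):
    """Illustrative sketch only: evaluate a model on one fold and collect
    accuracy plus per-class prediction counts (not the real log_epoch)."""
    model.eval()
    counts = torch.zeros(numclass, dtype=torch.long)
    correct, total = 0, 0
    with torch.no_grad():  # no gradients needed for evaluation
        for data, target in loader:
            data, target = data.to(device), target.to(device)
            pred = model(data).argmax(dim=1)
            correct += (pred == target).sum().item()
            total += target.numel()
            # Tally how often each class was predicted this epoch.
            counts += torch.bincount(pred.cpu(), minlength=numclass)
    return {"epoch": epoch,
            "accuracy": correct / total,
            "pred_counts": counts.tolist()}
```

A helper of this shape would be called once per epoch for each fold; the collected dictionaries are what an export step like run.export(logdir="logs") could serialize to disk.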