
# Variable(..., volatile=True) is the pre-0.4 API; in PyTorch >= 0.4
# wrap inference in torch.no_grad() instead (CPU tensors need no wrapping).
with torch.no_grad():
    if args.cuda:
        images_test = images_test.cuda()
        targets_test = [ann.cuda() for ann in targets_test]

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # must be set before torch first touches CUDA

import torch
torch.backends.cudnn.enabled = True
print(torch.cuda.device_count())  # 1, since only GPU 0 is visible
gpus = [0]

x is usually the output of a ReLU, so it is not Gaussian-distributed (the negative half is clipped to zero).
wx, being a weighted sum over many inputs, is much closer to Gaussian.
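A quick numerical check of this claim (a sketch; the variable names and skewness measure are mine, not from the notes): a ReLU output is one-sided and heavily skewed, while a weighted sum of many such values tends toward Gaussian by the central limit theorem.

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.maximum(rng.normal(size=(10_000, 256)), 0.0)  # ReLU output: mass piled at 0
w = rng.normal(size=256) / np.sqrt(256)
wx = x @ w                                           # sum of many terms -> near-Gaussian

def skew(v):
    """Sample skewness: 0 for a symmetric (e.g. Gaussian) distribution."""
    v = v.ravel()
    return float(np.mean(((v - v.mean()) / v.std()) ** 3))

print(skew(x))   # strongly positive: ReLU output is one-sided, far from Gaussian
print(skew(wx))  # close to 0: the weighted sum looks much more Gaussian
```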

The receptive field grows roughly exponentially with the number of strided layers: each stride-s layer multiplies the step between adjacent output pixels by s.
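This can be seen from the standard receptive-field recurrence (a sketch; the function name is mine): the "jump" between adjacent outputs multiplies by each layer's stride, so stride-2 layers make the receptive field grow geometrically.

```python
def receptive_field(layers):
    """layers: sequence of (kernel_size, stride) pairs, input to output."""
    rf, jump = 1, 1  # jump = distance, in input pixels, between adjacent outputs
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

print(receptive_field([(3, 2)] * 4))  # 31: roughly doubles with every stride-2 layer
print(receptive_field([(3, 1)] * 4))  # 9: only linear growth without striding
```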

NumPy vs. PyTorch Tensor equivalents:

    repeat     np.repeat(a, 16, axis=1)          a.repeat_interleave(16, dim=1)
    tile       np.tile(a, (2, 2))                a.repeat(2, 2)
    transpose  np.transpose(img, (0, 3, 1, 2))   a.permute(3, 2, 1, 0)
    expand     np.expand_dims(x, axis=0)         torch.unsqueeze(x, 0)

Note the naming trap: Tensor.repeat tiles the whole tensor (like np.tile), not element-wise like np.repeat. Element-wise repeat of a length-10 vector can also be emulated as c = a.view(-1, 1).repeat(1, 16).view(10, -1), and a.expand(1, 4) broadcasts a size-1 dim without copying memory.
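Shape checks for the NumPy column of the table above (a sketch; the torch equivalents are noted in comments but not executed here, and the array shapes are illustrative):

```python
import numpy as np

a = np.arange(10)

r = np.repeat(a, 16)                   # torch: a.repeat_interleave(16)
assert r.shape == (160,)

t = np.tile(a, (2, 2))                 # torch: a.repeat(2, 2) -- tiles the whole array
assert t.shape == (2, 20)

img = np.zeros((4, 32, 32, 3))         # NHWC batch of images
chw = np.transpose(img, (0, 3, 1, 2))  # torch: img.permute(0, 3, 1, 2) -> NCHW
assert chw.shape == (4, 3, 32, 32)

e = np.expand_dims(a, axis=0)          # torch: torch.unsqueeze(a, 0)
assert e.shape == (1, 10)
```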

>>> import numpy as np
>>> np.repeat([1, 2], 4)      # each element repeated in place
array([1, 1, 1, 1, 2, 2, 2, 2])
>>> np.tile([1, 2], 4)        # whole sequence tiled
array([1, 2, 1, 2, 1, 2, 1, 2])
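The same distinction in 2-D, where the axis argument matters (a small sketch):

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])

# repeat duplicates each element along the given axis
print(np.repeat(a, 2, axis=1))
# [[1 1 2 2]
#  [3 3 4 4]]

# tile duplicates the whole block
print(np.tile(a, (1, 2)))
# [[1 2 1 2]
#  [3 4 3 4]]
```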