import argparse
parser = argparse.ArgumentParser()
parser.add_argument('config', metavar='CFG',
                    help='path to configuration')
parser.add_argument('-t', '--threads', type=int, default=4,
                    help='how many threads are available on this machine')
parser.add_argument('-num', '--nums_epoch', default=200, type=int,
                    help='number of epochs')
parser.add_argument('-save_dir', '--save_dir', default="tmp", type=str,
                    help='directory to save results to')
parser.add_argument('-gpus', '--gpus', nargs='+', type=int,
                    help='e.g. --gpus 1234 2345 3456 4567')

Positional argument 'config'

No leading dash; it is passed as a plain positional value.

Optional argument '-t'/'--threads'

Requires the dash prefix and must be passed by name.
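As a quick sketch of how these are passed on the command line (the file name `config.yaml` and the values below are made up for illustration):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('config', metavar='CFG', help='path to configuration')
parser.add_argument('-t', '--threads', type=int, default=4)
parser.add_argument('--gpus', nargs='+', type=int)

# parse_args also accepts an explicit argv list, which is handy for testing;
# this mimics: python train.py config.yaml -t 8 --gpus 0 1
args = parser.parse_args(['config.yaml', '-t', '8', '--gpus', '0', '1'])
print(args.config, args.threads, args.gpus)
```

The positional `config` is consumed first; `nargs='+'` collects one or more values into a list.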

import torch
a = torch.tensor(0.0)
b = torch.tensor(0.0)

a.requires_grad = True
b.requires_grad = True

cosa = torch.cos(a)
cosb = torch.cos(b)

y1 = cosa + cosb
y2 = torch.tensor([
    cosa, cosb
])  # torch.tensor copies the values and detaches them from the autograd graph

y2.sum().backward() # RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

print("a has grad ", a.requires_grad)
print("a grad", a.grad)

The tensors should instead be concatenated:

y2 = torch.cat([
    cosa.reshape(1), cosb.reshape(1)  # cat needs tensors with at least 1 dim, so reshape the 0-d scalars
], dim=0)
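Putting it together, a minimal self-contained sketch showing that the concatenated tensor keeps the graph, so gradients reach `a`:

```python
import torch

a = torch.tensor(0.0, requires_grad=True)
b = torch.tensor(0.0, requires_grad=True)

# reshape the 0-d scalars to 1-d so torch.cat can join them;
# unlike torch.tensor([...]), this keeps the autograd graph intact
y2 = torch.cat([torch.cos(a).reshape(1), torch.cos(b).reshape(1)], dim=0)
y2.sum().backward()

print(a.grad)  # d/da cos(a) = -sin(a), which is 0 at a = 0
```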

---
title: Loss functions for face recognition
date: 2018-08-16 17:42:27
tags:
---

1. The original softmax

hardmax

![](https://ws2.sinaimg.cn/large/0069RVTdly1fubomqa5y3j30a0066glx.jpg)

softmax

![](https://ws1.sinaimg.cn/large/0069RVTdly1fubomz08qkj308k06ajrq.jpg)

Differences:

  1. softmax reaches the one-hot target more easily than hardmax, because large values are amplified further

  2. softmax encourages the features of different classes to separate, but does not push them far apart

  3. with logits like (5, 1, 1, 1) the loss is already small, so the CNN is close to convergence and the gradient stops decreasing

2. Face recognition needs features that are as compact as possible within a class and as separated as possible between classes
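The third difference above can be checked numerically; a small sketch treating the logits (5, 1, 1, 1) as one sample whose correct class comes first:

```python
import torch
import torch.nn.functional as F

# logits (5, 1, 1, 1), correct class at index 0
logits = torch.tensor([[5.0, 1.0, 1.0, 1.0]])
probs = F.softmax(logits, dim=1)
loss = F.cross_entropy(logits, torch.tensor([0]))

print(probs[0, 0].item())  # ~0.948: already close to one-hot
print(loss.item())         # ~0.053: the loss is already tiny
```

With the loss this small, the gradient signal pushing classes further apart is negligible, which is why plain softmax gives little intra-class compactness.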

---
title: Hinton Neural Network
date: 2018-08-15 22:28:22
tags:
---

1a

# binary threshold: map positive pre-activations to +1, the rest to -1
out[out > 0] = 1
out[out <= 0] = -1

As your function’s derivative is 0 everywhere (except for the input 0, where it isn’t smooth), you can also implement it as a function:
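A minimal sketch of that idea as a custom `torch.autograd.Function` (the class name `BinaryThreshold` is made up for illustration): `forward` applies the threshold, and `backward` returns zeros since the derivative is 0 almost everywhere.

```python
import torch

class BinaryThreshold(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # +1 for positive inputs, -1 otherwise
        out = torch.ones_like(x)
        out[x <= 0] = -1.0
        return out

    @staticmethod
    def backward(ctx, grad_output):
        # derivative is 0 almost everywhere, so pass back zeros
        return torch.zeros_like(grad_output)

x = torch.tensor([2.0, -3.0, 0.5], requires_grad=True)
y = BinaryThreshold.apply(x)
y.sum().backward()
print(y)       # +1 for the positive inputs, -1 for the negative one
print(x.grad)  # all zeros, as defined in backward
```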

PyTorch frees the computation graph during backward by default,
so to backpropagate through the same graph more than once, call loss.backward(retain_graph=True)
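A small sketch of the behaviour (`x` and `y` are made-up example tensors): the second backward call only succeeds because the first one kept the graph alive.

```python
import torch

x = torch.tensor(2.0, requires_grad=True)
y = x * x

y.backward(retain_graph=True)  # keep the graph for a second pass
y.backward()                   # would raise RuntimeError without retain_graph above
print(x.grad)                  # gradients accumulate: 2x + 2x = 8
```

Note that gradients accumulate across the two calls; zero them with `x.grad = None` if that is not what you want.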

echo 'one two three' | xargs -t rm                  # -t prints each command before executing it
cat foo.txt | xargs -I % sh -c 'echo %; mkdir %'    # -I % substitutes each input line for %