
Machine Learning Foundations - Week 2 Key Points


1. When Can Machines Learn?

1.2 Learning to Answer Yes/No

PLA takes a linearly separable D and perceptrons H to get hypothesis g.

unknown target function f: X → Y

training examples D: (x1, y1), ···, (xN, yN)  --->  learning algorithm A  --->  final hypothesis g ≈ f

                                                     (hypothesis set H, H = all possible perceptrons)

Perceptron:

A Simple Hypothesis Set, the ‘Perceptron’ (so named historically): for features x = (x1, ···, xd), h(x) = sign((∑ wi xi) − threshold)

Vector Form of Perceptron Hypothesis: absorb the threshold as w0 = −threshold with a constant x0 = 1, giving h(x) = sign(w^T x)

Perceptrons in R^2: perceptrons ⇔ linear (binary) classifiers
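As a concrete illustration, here is a minimal sketch of the perceptron hypothesis in Python/NumPy (the function name perceptron_h and the tie-breaking at score 0 are my choices, not the course's):

```python
import numpy as np

def perceptron_h(w, x):
    """Perceptron hypothesis h(x) = sign(w^T x).

    w: shape (d+1,), with w[0] = -threshold (the bias weight)
    x: shape (d,); a constant x0 = 1 is prepended to absorb the bias
    """
    x = np.concatenate(([1.0], x))        # x0 = 1 so w[0] acts as -threshold
    return 1 if np.dot(w, x) > 0 else -1
```

For example, with w = (−2, 1, 1), points with x1 + x2 > 2 are classified +1, i.e. h is the line x1 + x2 = 2 in R^2, which is exactly the "perceptrons ⇔ linear classifiers" picture above.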

Perceptron Learning Algorithm (PLA):

start from some w0 (say, 0), and ‘correct’ its mistakes on D: whenever sign(wt^T xn) ≠ yn, update wt+1 ← wt + yn xn (a sketch follows)
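A minimal sketch of PLA under the same conventions (X is assumed to already carry the constant x0 = 1 column and y ∈ {+1, −1}; the cyclic visiting order and the max_passes safety cap are my additions):

```python
import numpy as np

def pla(X, y, max_passes=1000):
    """Cyclic PLA: start from w0 = 0 and repeatedly 'correct' mistakes
    w <- w + y_n * x_n; halts once a full pass over D is mistake-free."""
    w = np.zeros(X.shape[1])
    for _ in range(max_passes):          # cap only matters if D is not separable
        mistake_found = False
        for x_n, y_n in zip(X, y):
            if np.sign(np.dot(w, x_n)) != y_n:   # sign(0) counts as a mistake
                w = w + y_n * x_n                # rotate w toward the right answer
                mistake_found = True
        if not mistake_found:
            return w                     # no more mistakes: D is separated
    return w
```

Visiting the examples in cyclic order is the classic variant; any rule that keeps correcting mistakes (e.g. picking a random one, as in the pocket sketch below) enjoys the same halting guarantee on separable D.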

Linear Separability:

linearly separable D ⇔ PLA halts (i.e. no more mistakes) ⇔ there exists a perfect wf such that yn = sign(wf^T xn) for all n

More about PLA:

As long as D is linearly separable and PLA corrects by mistake:

• inner product of wf and wt grows fast; length of wt grows slowly ⇒ halts within T ≤ (R/ρ)² updates (derivation sketched below)

• PLA ‘lines’ are more and more aligned with wf ⇒ halts
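Spelling the bound out (a standard reconstruction of the course's proof sketch; here R is the radius of the data and ρ the margin of the perfect wf):

```latex
% Assume linearly separable D, a perfect w_f, and PLA started from w_0 = 0.
\begin{align*}
  R^2 &= \max_n \|x_n\|^2, \qquad
  \rho = \min_n \frac{y_n\, w_f^{\mathsf T} x_n}{\|w_f\|} > 0\\
  w_f^{\mathsf T} w_{t+1}
    &\ge w_f^{\mathsf T} w_t + \rho\,\|w_f\|
    && \text{inner product grows fast}\\
  \|w_{t+1}\|^2
    &\le \|w_t\|^2 + R^2
    && \text{updates happen only on mistakes: } y_n w_t^{\mathsf T} x_n \le 0\\
  1 \;\ge\; \frac{w_f^{\mathsf T} w_T}{\|w_f\|\,\|w_T\|}
    \;&\ge\; \frac{T\rho}{\sqrt{T}\,R}
    \;=\; \sqrt{T}\,\frac{\rho}{R}
    \quad\Longrightarrow\quad T \le \left(\frac{R}{\rho}\right)^{2}
\end{align*}
```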

Learning with Noisy Data --> Pocket Algorithm:

modify the PLA update loop by keeping the best weights seen so far in the ‘pocket’ (returned at the end instead of the last wt)
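A minimal sketch of the pocket modification (same data conventions as the PLA sketch above; correcting a random mistake under an update budget follows the course's description, while the function names and max_updates default are mine):

```python
import numpy as np

def pocket(X, y, max_updates=1000, seed=0):
    """Pocket algorithm: run PLA-style corrections, but return the
    weights that made the fewest mistakes on D seen so far."""
    def num_mistakes(w):
        return int(np.sum(np.sign(X @ w) != y))

    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    best_w, best_err = w.copy(), num_mistakes(w)
    for _ in range(max_updates):
        wrong = np.flatnonzero(np.sign(X @ w) != y)
        if wrong.size == 0:              # D happened to be separable
            return w
        n = rng.choice(wrong)            # correct a random mistake
        w = w + y[n] * X[n]
        err = num_mistakes(w)
        if err < best_err:               # better weights go in the pocket
            best_w, best_err = w.copy(), err
    return best_w
```

Unlike PLA, the pocket keeps the historically best w because on noisy D a single correction can make the overall error worse; the extra num_mistakes pass after each update is the price of that guarantee.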