
Machine Learning Week 1

2015-10-09 19:37


Machine Learning Week 1
Types of Learning
Supervised learning

Unsupervised learning

Linear Regression
Hypothesis Function

Cost Function

Gradient Descent

Types of Learning

Supervised learning:

The training data comes with the “right answers” (labels) given.

i) Regression: predict continuous-valued outputs



ii) Classification: predict a discrete-valued output (0 or 1, or even more classes)



Unsupervised learning:

No “right answers” are given; a typical example is a clustering algorithm, which groups unlabeled data automatically.



Linear Regression

Hypothesis Function

The Hypothesis Function:

hθ(x) = θ0 + θ1·x

In detail, hθ maps an input x to a predicted output y, and θ0 and θ1 are the parameters of the model.
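A minimal NumPy sketch of this hypothesis (the function name hypothesis and the example parameter values are illustrative assumptions, not from the course):

import numpy as np

def hypothesis(theta0, theta1, x):
    # Linear hypothesis h_theta(x) = theta0 + theta1 * x
    return theta0 + theta1 * x

# Example: predictions with illustrative parameters theta0 = 1.0, theta1 = 2.0
x = np.array([0.0, 1.0, 2.0])
print(hypothesis(1.0, 2.0, x))  # -> [1. 3. 5.]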



The way of choosing the parameters: pick θ0 and θ1 so that hθ(x) is close to y for the training examples, i.e. so that the cost function below is minimized.



Cost Function

Cost Function:

J(θ0, θ1) = (1/2m) · Σ_{i=1..m} (hθ(x^(i)) − y^(i))²
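As a concrete sketch, the same cost in NumPy (the helper name compute_cost and the tiny example data set are assumptions for illustration):

import numpy as np

def compute_cost(theta0, theta1, x, y):
    # Squared-error cost J(theta0, theta1) over m training examples
    m = len(y)
    predictions = theta0 + theta1 * x   # h_theta(x^(i)) for every example
    errors = predictions - y            # h_theta(x^(i)) - y^(i)
    return np.sum(errors ** 2) / (2 * m)

# Example: with y = 2x exactly, the parameters (0, 2) give zero cost
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
print(compute_cost(0.0, 2.0, x, y))  # -> 0.0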

3D surface plot of the cost function J(θ0, θ1):



Using contour plots to represent the 3D surface:



Gradient Descent

Gradient Descent:

θj := θj − α · ∂J(θ0, θ1)/∂θj    (for j = 0 and j = 1)
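Plugging the squared-error cost J into this rule, the two partial derivatives work out to (a standard calculus step not spelled out in the original notes):

∂J(θ0, θ1)/∂θ0 = (1/m) · Σ_{i=1..m} (hθ(x^(i)) − y^(i))

∂J(θ0, θ1)/∂θ1 = (1/m) · Σ_{i=1..m} (hθ(x^(i)) − y^(i)) · x^(i)

which gives the concrete update rules below.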

In particular, gradient descent for linear regression:

repeat until convergence: {
    θ0 := θ0 − α · (1/m) · Σ_{i=1..m} (hθ(x^(i)) − y^(i))
    θ1 := θ1 − α · (1/m) · Σ_{i=1..m} (hθ(x^(i)) − y^(i)) · x^(i)
}

Update the parameters simultaneously: compute temp0 and temp1 from the current (θ0, θ1) first, and only then assign θ0 := temp0 and θ1 := temp1, so that neither update uses an already-updated value.
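A minimal NumPy sketch of these updates (the learning rate, iteration count, and the function name gradient_descent are illustrative assumptions); note that temp0 and temp1 are both computed from the current (θ0, θ1) before either parameter is overwritten, which is exactly the simultaneous update described above:

import numpy as np

def gradient_descent(x, y, alpha=0.05, num_iters=2000):
    # Batch gradient descent for h_theta(x) = theta0 + theta1 * x
    m = len(y)
    theta0, theta1 = 0.0, 0.0
    for _ in range(num_iters):
        errors = (theta0 + theta1 * x) - y               # h_theta(x^(i)) - y^(i)
        temp0 = theta0 - alpha * np.sum(errors) / m      # new value for theta0
        temp1 = theta1 - alpha * np.sum(errors * x) / m  # new value for theta1
        theta0, theta1 = temp0, temp1                    # simultaneous update
    return theta0, theta1

# Example: fit y ≈ 2x on a tiny data set
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])
print(gradient_descent(x, y))  # converges toward theta0 ≈ 0, theta1 ≈ 2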



Gradient descent with one parameter (minimizing J(θ1)):



Why the learning rate α should be neither too small nor too large: if α is too small, gradient descent takes tiny steps and converges slowly; if α is too large, it can overshoot the minimum and fail to converge, or even diverge.



Even with a fixed learning rate, the steps automatically become smaller as gradient descent approaches a local minimum, because the derivative term shrinks, so there is no need to decrease α over time:



Gradient descent with two parameters (θ0 and θ1):



Linear regression's cost function is always bowl-shaped (it is a convex function):

So there are no local optima, only a single global optimum, and gradient descent (with a suitable learning rate) always converges to it.
