
Cost function of Logistic Regression and Neural Network

2018-03-12 22:25

Logistic / Sigmoid function

$$g(x) = \frac{1}{1+e^{-x}} = \frac{e^x}{1+e^x}$$
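The two algebraically equal forms above are also the key to a numerically stable implementation: keeping the argument of `exp` non-positive avoids overflow. A minimal NumPy sketch (the function name `sigmoid` is our own):

```python
import numpy as np

def sigmoid(z):
    """g(z) = 1/(1+e^{-z}) = e^z/(1+e^z), element-wise.

    exp(-|z|) always lies in (0, 1], so it never overflows; the two
    branches of the where() are exactly the two forms of g above."""
    z = np.asarray(z, dtype=float)
    a = np.exp(-np.abs(z))
    return np.where(z >= 0, 1.0 / (1.0 + a), a / (1.0 + a))
```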

Cost function

Logistic Regression

$$h_\theta(X) = g(X^\intercal \theta) = P(y=1 \mid X;\theta)$$

Let $z = X^\intercal \theta$. Then

$$\ln P(y=y \mid X;\theta) = y \ln P(y=1 \mid X;\theta) + (1-y) \ln P(y=0 \mid X;\theta)$$

$$= y \ln h_\theta(X) + (1-y) \ln[1-h_\theta(X)]$$

$$= y \ln g(z) + (1-y) \ln[1-g(z)]$$

Therefore, using $g'(z) = g(z)[1-g(z)]$,

$$\mathrm{d} \ln P(y=y \mid X;\theta) = y\, \mathrm{d} \ln g(z) + (1-y)\, \mathrm{d} \ln[1-g(z)]$$

$$= y \cdot \frac{1}{g(z)}\, g(z)[1-g(z)]\, \mathrm{d}z + (1-y)\, \frac{1}{1-g(z)}\, (-1)\, g(z)[1-g(z)]\, \mathrm{d}z$$

$$= \left\{ y[1-g(z)] - (1-y)\, g(z) \right\} \mathrm{d}z$$

$$= [y - g(z)]\, \mathrm{d}z$$

$$= [y - g(X^\intercal \theta)]\, X^\intercal \mathrm{d}\theta$$
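The result that the gradient of $\ln P$ is $[y - g(X^\intercal\theta)]\,X$ can be verified with a central-difference check (a sketch; the helper names are ours):

```python
import numpy as np

def sigmoid(z):
    a = np.exp(-np.abs(z))
    return np.where(z >= 0, 1.0 / (1.0 + a), a / (1.0 + a))

def log_p(theta, x, y):
    """ln P(y | x; theta) = y ln g(z) + (1-y) ln(1-g(z)), z = x . theta.
    logaddexp keeps the logs stable: ln g(z) = -ln(1 + e^{-z})."""
    z = x @ theta
    return y * (-np.logaddexp(0.0, -z)) + (1 - y) * (-np.logaddexp(0.0, z))

def grad_log_p(theta, x, y):
    """The gradient derived above: [y - g(x . theta)] x."""
    return (y - sigmoid(x @ theta)) * x

# Compare against a numerical derivative in each coordinate direction.
rng = np.random.default_rng(0)
theta, x, y = rng.normal(size=3), rng.normal(size=3), 1.0
eps = 1e-6
numeric = np.array([(log_p(theta + eps * e, x, y) -
                     log_p(theta - eps * e, x, y)) / (2 * eps)
                    for e in np.eye(3)])
```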

The log-likelihood is

$$L(\theta) = \ln\left[ \prod_{i=1}^m P(y=y_i \mid X_i;\theta) \right] = \sum_{i=1}^m \ln P(y=y_i \mid X_i;\theta)$$

Define

$$\mathrm{cost}(\theta) = -\frac{1}{m} L(\theta) = -\frac{1}{m} \sum_{i=1}^m \ln P(y=y_i \mid X_i;\theta)$$

$$= -\frac{1}{m} \sum_{i=1}^m \left\{ y_i \ln h_\theta(X_i) + (1-y_i) \ln[1-h_\theta(X_i)] \right\}$$

$$= -\frac{1}{m} \sum_{i=1}^m \left\{ y_i \ln g(z_i) + (1-y_i) \ln[1-g(z_i)] \right\}, \quad \text{where } z_i = X_i^\intercal \theta$$

Then $\max L(\theta) = -m \min \mathrm{cost}(\theta)$.

$\mathrm{cost}(\theta)$ is thus the cost function to minimize.
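Vectorized over the $m$ training samples, this cost might be implemented as follows (a sketch; `X` is assumed to hold the rows $X_i^\intercal$, and the logs are computed via `logaddexp` for stability):

```python
import numpy as np

def cost(theta, X, y):
    """cost(theta) = -(1/m) sum_i { y_i ln g(z_i) + (1-y_i) ln[1-g(z_i)] }.

    X: (m, n+1) matrix whose i-th row is X_i^T; y: (m,) vector of 0/1 labels.
    ln g(z) = -ln(1+e^{-z}) and ln(1-g(z)) = -ln(1+e^{z}), so large |z|
    cannot overflow when written with logaddexp."""
    m = X.shape[0]
    z = X @ theta
    log_h = -np.logaddexp(0.0, -z)      # ln h_theta(X_i)
    log_1mh = -np.logaddexp(0.0, z)     # ln [1 - h_theta(X_i)]
    return -(y @ log_h + (1.0 - y) @ log_1mh) / m
```

At $\theta = 0$ every prediction is $g(0) = \tfrac12$, so the cost is $\ln 2$ regardless of the data, which makes a handy sanity check.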

To compute derivatives, let $J(\theta) = -L(\theta)$ (written $J$ here to avoid reusing $g$, which already denotes the sigmoid). Then

$$\mathrm{d}J(\theta) = -\sum_{i=1}^m [y_i - g(X_i^\intercal\theta)]\, X_i^\intercal \mathrm{d}\theta$$

$$= \sum_{i=1}^m [g(X_i^\intercal\theta) - y_i]\, X_i^\intercal \mathrm{d}\theta$$

Therefore

$$\nabla J(\theta) = \sum_{i=1}^m [g(X_i^\intercal\theta) - y_i]\, X_i = X^\intercal [g(X\theta) - y]$$

where

$$X = \begin{pmatrix} X_1^\intercal \\ \vdots \\ X_m^\intercal \end{pmatrix}, \quad y = \begin{pmatrix} y_1 \\ \vdots \\ y_m \end{pmatrix}, \quad g(X\theta) = \begin{pmatrix} g(X_1^\intercal\theta) \\ \vdots \\ g(X_m^\intercal\theta) \end{pmatrix}$$

Then

$$\mathrm{d}\{\nabla J(\theta)\} = \sum_{i=1}^m \mathrm{d}[g(X_i^\intercal\theta)]\, X_i = \sum_{i=1}^m g'(X_i^\intercal\theta)\, (X_i^\intercal \mathrm{d}\theta)\, X_i = \sum_{i=1}^m g'(X_i^\intercal\theta)\, X_i X_i^\intercal\, \mathrm{d}\theta$$

Therefore the Hessian is

$$H_J(\theta) = \sum_{i=1}^m g'(X_i^\intercal\theta)\, X_i X_i^\intercal$$

Component-wise,

$$\frac{\partial}{\partial \theta_j} J(\theta) = \sum_{i=1}^m [g(X_i^\intercal\theta) - y_i]\, x_{ij}, \quad j \in \mathbb{N},\ 0 \le j \le n$$
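The gradient and Hessian formulas translate directly into NumPy (a sketch under the same conventions, the rows of `X` being $X_i^\intercal$); together they give the Newton-Raphson update $\theta \leftarrow \theta - H^{-1}\nabla J$:

```python
import numpy as np

def sigmoid(z):
    a = np.exp(-np.abs(z))
    return np.where(z >= 0, 1.0 / (1.0 + a), a / (1.0 + a))

def grad_J(theta, X, y):
    """nabla J = X^T [g(X theta) - y]."""
    return X.T @ (sigmoid(X @ theta) - y)

def hess_J(theta, X):
    """H_J = sum_i g'(z_i) X_i X_i^T, with g'(z) = g(z)[1 - g(z)].
    Scaling each row X_i by g'(z_i) before the product computes the sum."""
    g = sigmoid(X @ theta)
    w = g * (1.0 - g)                 # g'(z_i) for each sample
    return (X * w[:, None]).T @ X

def newton_step(theta, X, y):
    """One Newton-Raphson update for minimizing J."""
    return theta - np.linalg.solve(hess_J(theta, X), grad_J(theta, X, y))
```

At $\theta = 0$, $g(z_i) = \tfrac12$ and $g'(z_i) = \tfrac14$, so the gradient reduces to $X^\intercal(\tfrac12 - y)$ and the Hessian to $\tfrac14 X^\intercal X$.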

Regularized Logistic Regression

$$\mathrm{cost}(\theta) = -\frac{1}{m} \sum_{i=1}^m \left\{ y_i \ln h_\theta(X_i) + (1-y_i) \ln[1-h_\theta(X_i)] \right\} + \frac{\lambda}{2n} \sum_{j=1}^n \theta_j^2$$
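A sketch of the regularized cost, following the formula above. Note this post divides the penalty by $n$ (the feature count); many texts divide by $m$ instead. The intercept $\theta_0$ is excluded from the penalty, since the sum starts at $j = 1$:

```python
import numpy as np

def cost_reg(theta, X, y, lam):
    """Data term + (lambda/(2n)) * sum_{j=1..n} theta_j^2.

    theta lives in R^{n+1}; theta[0] is the intercept and is not
    penalized.  The data term is the unregularized cost from before."""
    m, d = X.shape
    n = d - 1
    z = X @ theta
    log_h = -np.logaddexp(0.0, -z)
    log_1mh = -np.logaddexp(0.0, z)
    data = -(y @ log_h + (1.0 - y) @ log_1mh) / m
    return data + lam / (2.0 * n) * np.sum(theta[1:] ** 2)
```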



$$H_{\mathrm{cost}}(\theta) = \frac{1}{m} \sum_{i=1}^m g'(X_i^\intercal\theta)\, X_i X_i^\intercal + \frac{\lambda}{n} \begin{pmatrix} 0 & & & \\ & 1 & & \\ & & \ddots & \\ & & & 1 \end{pmatrix}$$

Property

$H_{\mathrm{cost}}(\theta)$ is positive definite.

Proof

For all $Z = \begin{pmatrix} z_0 \\ \vdots \\ z_n \end{pmatrix} \in \mathbb{R}^{n+1}$:

$$Z^\intercal H_{\mathrm{cost}}(\theta)\, Z = \frac{1}{m} \sum_{i=1}^m g'(X_i^\intercal\theta)\, (X_i^\intercal Z)^2 + \frac{\lambda}{n} \sum_{j=1}^n z_j^2 \ge 0$$

since $g'(z) = g(z)[1-g(z)] > 0$.

If $Z^\intercal H_{\mathrm{cost}}(\theta)\, Z = 0$, then $z_j = 0$ for all $j \in \mathbb{N}$ with $1 \le j \le n$, so, using $x_{i0} = 1$ and hence $X_i^\intercal Z = z_0$,

$$Z^\intercal H_{\mathrm{cost}}(\theta)\, Z = \frac{1}{m} \sum_{i=1}^m g'(X_i^\intercal\theta)\, z_0^2 = 0 \implies z_0 = 0$$

Thus $Z = 0$, and therefore $H_{\mathrm{cost}}(\theta)$ is positive definite. $\blacksquare$
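The positive definiteness can also be checked numerically by inspecting the eigenvalues of the regularized Hessian (a sketch assuming the derivation above: data term averaged over $m$, penalty curvature $\lambda/n$ on the non-intercept coordinates):

```python
import numpy as np

def sigmoid(z):
    a = np.exp(-np.abs(z))
    return np.where(z >= 0, 1.0 / (1.0 + a), a / (1.0 + a))

def hess_cost(theta, X, lam):
    """(1/m) sum_i g'(z_i) X_i X_i^T + (lambda/n) diag(0, 1, ..., 1)."""
    m, d = X.shape
    g = sigmoid(X @ theta)
    H = (X * (g * (1.0 - g))[:, None]).T @ X / m
    D = np.eye(d)
    D[0, 0] = 0.0                      # the intercept is not penalized
    return H + lam / (d - 1) * D

# Random data with the conventional all-ones intercept column x_{i0} = 1.
rng = np.random.default_rng(1)
X = np.hstack([np.ones((20, 1)), rng.normal(size=(20, 4))])
H = hess_cost(rng.normal(size=5), X, lam=0.1)
eigenvalues = np.linalg.eigvalsh(H)    # all strictly positive, per the proof
```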

Neural Network for Classification

$$\mathrm{cost}(\theta) = -\frac{1}{m} \sum_{i=1}^m \sum_{k=1}^K \left\{ y_{ik} \left( \ln h_\theta(X_i) \right)_k + (1-y_{ik}) \left( \ln[1-h_\theta(X_i)] \right)_k \right\} + \frac{\lambda}{2m} \sum_{l=1}^{L-1} \sum_{i=1}^{s_{l+1}} \sum_{j=1}^{s_l} \left( \theta_{ij}^{(l)} \right)^2$$

where $K$ is the number of output classes, $L$ the number of layers, $s_l$ the number of units in layer $l$, and $\theta_{ij}^{(l)}$ the weight from unit $j$ of layer $l$ to unit $i$ of layer $l+1$; the bias weights $\theta_{i0}^{(l)}$ are not penalized, since the inner sum starts at $j=1$.
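Given the network's outputs, this cost might be computed as follows (a sketch; `h` and `Y` are assumed to be $(m, K)$ arrays of outputs and one-hot labels, and `thetas` a list of weight matrices with the bias weights in column 0):

```python
import numpy as np

def nn_cost(h, Y, thetas, lam):
    """Cross-entropy over m samples and K classes plus the weight penalty
    (lambda/(2m)) * sum over layers of the squared non-bias weights.

    h, Y: (m, K) arrays of outputs h_theta(X_i) and one-hot labels.
    thetas: list of (s_{l+1}, s_l + 1) weight matrices; column 0 holds
    the bias weights, which the penalty skips (j runs from 1 to s_l)."""
    m = Y.shape[0]
    data = -np.sum(Y * np.log(h) + (1.0 - Y) * np.log(1.0 - h)) / m
    penalty = lam / (2.0 * m) * sum(np.sum(T[:, 1:] ** 2) for T in thetas)
    return data + penalty
```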