batch gradient method
2014-12-19 22:17
#include <stdio.h>
#include <stdlib.h>

#define M 100
#define N 2
#define Max_it 1500

void stochastic(double *x, double *y, int m, double *theta, int n, double alpha, int max_it);

int main(void)
{
    int i, j, count;
    double x[M], y[M], theta[N], alpha;
    FILE *ifp;

    /* read data from file: one "x,y" pair per line */
    count = 0;
    if ((ifp = fopen("ex1data1.txt", "r")) == NULL) {
        printf("open error, check!\n");
        exit(1);
    }
    while (count < M && fscanf(ifp, "%lf,%lf", &x[count], &y[count]) == 2)
        count++;
    fclose(ifp);

    for (j = 0; j < N; j++)
        theta[j] = 0.0;
    alpha = 0.02;

    stochastic(x, y, count, theta, N, alpha, Max_it);

    printf("The regression parameters are:\n");
    for (i = 0; i < N; i++)
        printf("%10.2f\n", theta[i]);
    return 0;
}

/*==========================================================================*/
void stochastic(double *x, double *y, int m, double *theta, int n, double alpha, int max_it)
{
    int i, k;
    double h;
    (void)n;  /* the model has exactly two parameters, theta[0] and theta[1] */

    for (k = 0; k < max_it; k++) {  /* for max iteration numbers */
        for (i = 0; i < m; i++) {
            /* compute the prediction once, then update both parameters from
               the same value (the original recomputed it after updating
               theta[0], mixing old and new parameters within one step) */
            h = theta[0] + theta[1] * x[i];
            theta[0] += alpha * (y[i] - h) / m;        /* the 1/m factor   */
            theta[1] += alpha * (y[i] - h) * x[i] / m; /* scales the step  */
        }
    }
}
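For reference, the inner loop performs the per-sample update below, written for the two-parameter linear hypothesis the code uses; the extra 1/m factor comes from the posted code and simply scales the effective learning rate:

\[
h_\theta(x) = \theta_0 + \theta_1 x, \qquad
\theta_0 \leftarrow \theta_0 + \frac{\alpha}{m}\bigl(y^{(i)} - h_\theta(x^{(i)})\bigr), \qquad
\theta_1 \leftarrow \theta_1 + \frac{\alpha}{m}\bigl(y^{(i)} - h_\theta(x^{(i)})\bigr)\,x^{(i)}.
\]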
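Since the title refers to the batch gradient method while the routine above updates theta after every single sample (a stochastic-style pass), here is a minimal sketch of the true batch variant for comparison. The function name batch and its exact form are my own illustration, not part of the original post; it accumulates the gradient over all m samples and updates theta once per outer iteration:

/* Hypothetical batch-gradient-descent variant (illustration, not the
 * author's original routine): sum the gradient over the whole data set,
 * then apply one simultaneous update per iteration. */
void batch(double *x, double *y, int m, double *theta, int n, double alpha, int max_it)
{
    int i, k;
    double h, grad0, grad1;
    (void)n;  /* fixed two-parameter model, as in the routine above */

    for (k = 0; k < max_it; k++) {
        grad0 = grad1 = 0.0;
        for (i = 0; i < m; i++) {
            h = theta[0] + theta[1] * x[i];  /* current prediction */
            grad0 += (y[i] - h);             /* accumulate residuals */
            grad1 += (y[i] - h) * x[i];
        }
        /* one update per full pass over the data */
        theta[0] += alpha * grad0 / m;
        theta[1] += alpha * grad1 / m;
    }
}

Calling batch(x, y, count, theta, N, alpha, Max_it) in place of stochastic(...) in main above drops in with no other changes.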