Coursera Machine Learning Week 10 Quiz: Large Scale Machine Learning
2017-09-23 23:17
Question 1. Suppose you are training a classifier with stochastic gradient descent, and you plot the cost, averaged over the last 500 examples, as a function of the number of iterations; the averaged cost is slowly increasing over time. Which of the following changes are likely to help? (A sketch of this monitoring diagnostic follows the options.)

A. Try averaging the cost over a larger number of examples (say 1000 examples instead of 500) in the plot.
B. Try using a larger learning rate α.
C. This is not an issue, as we expect this to occur with stochastic gradient descent.
D. Try using a smaller learning rate α.
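Here is a minimal Python sketch (illustrative only; not from the course materials, which used Octave) of the diagnostic these options describe: record the cost on each example before the update, average over a window of 500 examples, and inspect the sequence. The data and the `sgd_with_monitoring` function are hypothetical, but they show why a smaller learning rate α typically turns a noisy or increasing averaged cost into a generally decreasing one.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 10_000, 3
X = rng.normal(size=(m, n))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=m)

def sgd_with_monitoring(alpha, window=500):
    theta = np.zeros(n)
    recent, averaged = [], []
    for i in rng.permutation(m):            # shuffle before running SGD
        err = X[i] @ theta - y[i]
        recent.append(0.5 * err ** 2)       # cost on this example, pre-update
        theta = theta - alpha * err * X[i]  # single-example gradient step
        if len(recent) == window:
            averaged.append(float(np.mean(recent)))
            recent = []
    return theta, averaged

# A too-large alpha can make the averaged cost oscillate or grow; a smaller
# alpha generally gives a decreasing (if noisy) curve.
_, rising = sgd_with_monitoring(alpha=1.0)
_, falling = sgd_with_monitoring(alpha=0.01)
print(rising[:3])
print(falling[:3])
```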
Question 2. Which of the following statements about stochastic gradient descent are true? (A sketch contrasting one batch update with one stochastic update follows the options.)

A. One of the advantages of stochastic gradient descent is that it uses parallelization and thus runs much faster than batch gradient descent.
B. Before running stochastic gradient descent, you should randomly shuffle (reorder) the training set.
C. In order to make sure stochastic gradient descent is converging, we typically compute J_train(θ) after each iteration (and plot it) in order to make sure that the cost function is generally decreasing.
D. If you have a huge training set, then stochastic gradient descent may be much faster than batch gradient descent.
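The following hypothetical Python sketch contrasts the cost of a single parameter update under the two methods, and shows why evaluating J_train(θ) after every stochastic update is impractical on a huge training set: that evaluation alone is a full pass over the data. All names and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 1_000_000, 5
X = rng.normal(size=(m, n))
y = X @ rng.normal(size=n)
theta, alpha = np.zeros(n), 0.01

def J_train(theta):
    # Full training cost: an O(m) pass over the data, which is why we do NOT
    # compute it after every stochastic update on a huge training set.
    return 0.5 * np.mean((X @ theta - y) ** 2)

# One batch gradient descent update touches all m examples: O(m * n) work.
theta_batch = theta - alpha * (X.T @ (X @ theta - y)) / m

# One stochastic gradient descent update touches one example: O(n) work.
i = rng.integers(m)
theta_sgd = theta - alpha * (X[i] @ theta - y[i]) * X[i]

print(J_train(theta_batch), J_train(theta_sgd))
```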
Question 3. Which of the following statements about online learning are true? (A sketch of an online-learning loop follows the options.)

A. One of the advantages of online learning is that if the function we're modeling changes over time (such as if we are modeling the probability of users clicking on different URLs, and user tastes/preferences are changing over time), the online learning algorithm will automatically adapt to these changes.
B. Online learning algorithms are usually best suited to problems where we have a continuous/non-stop stream of data that we want to learn from.
C. Online learning algorithms are most appropriate when we have a fixed training set of size m that we want to train on.
D. When using online learning, you must save every new training example you get, as you will need to reuse past examples to re-train the model even after you get new training examples in the future.
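A minimal Python sketch of an online-learning loop, assuming a stream of (features, clicked?) events; everything here (the stream, the drift model, the function names) is hypothetical. Each example is used for one gradient update and then discarded, and because updates never stop, the parameters keep tracking drifting user preferences.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def online_logistic_regression(stream, n_features, alpha=0.1):
    theta = np.zeros(n_features)
    for x, y in stream:               # continuous, non-stop stream of events
        h = sigmoid(x @ theta)        # predicted click probability
        theta -= alpha * (h - y) * x  # learn from this one example...
        # ...then discard it; past examples are never stored or reused
    return theta

rng = np.random.default_rng(2)

def click_stream(n_events=5000, d=4):
    w = rng.normal(size=d)
    for _ in range(n_events):
        w += 0.001 * rng.normal(size=d)  # user preferences drift over time
        x = rng.normal(size=d)
        yield x, float(rng.random() < sigmoid(x @ w))

theta = online_logistic_regression(click_stream(), n_features=4)
print(theta)
```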
Question 4. Suppose you have a very large training set. Which of the following algorithms/computations can be parallelized using map-reduce, splitting the training set across different machines? (A sketch of the map-reduce decomposition follows the options.)

A. Computing the average of all the features in your training set, μ = (1/m) ∑_{i=1}^{m} x^{(i)} (say, in order to perform mean normalization).
B. Logistic regression trained using batch gradient descent.
C. Logistic regression trained using stochastic gradient descent.
D. Linear regression trained using stochastic gradient descent.
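This single-process Python sketch (hypothetical, for illustration only) shows why these computations map-reduce well: both the feature mean and a batch gradient reduce to sums over the training set, so each machine can sum its own slice and a reducer adds the partial results.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, N = 12_000, 4, 4                 # N simulated machines
X = rng.normal(size=(m, n))
chunks = np.array_split(X, N)          # split the training set N ways

# "Map": each machine computes a partial sum over its slice.
partial_sums = [chunk.sum(axis=0) for chunk in chunks]
# "Reduce": one machine adds the partial results.
mu = sum(partial_sums) / m             # mean of each feature

assert np.allclose(mu, X.mean(axis=0))
# Batch-gradient training parallelizes the same way, because the gradient is
# also a sum over examples. SGD does not: each update depends on the result
# of the previous one, so its steps are inherently sequential.
```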
Question 5. Which of the following statements about map-reduce are true? (The master/worker update pattern from option A is sketched after my note below.)

A. When using map-reduce with gradient descent, we usually use a single machine that accumulates the gradients from each of the map-reduce machines, in order to compute the parameter update for that iteration.
B. Because of network latency and other overhead associated with map-reduce, if we run map-reduce using N computers, we might get less than an N-fold speedup compared to using 1 computer.
C. If we run map-reduce using N computers, then we will always get at least an N-fold speedup compared to using 1 computer.
D. If you have only 1 computer with 1 computing core, then map-reduce is unlikely to help.
For Question 5 I chose B and D, which was incorrect.
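A minimal Python sketch of the pattern in option A, simulated in one process with hypothetical data: each of N "machines" computes the gradient over its slice of the training set, and a single master sums the N partial gradients and applies the parameter update for that iteration.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, N, alpha = 8_000, 3, 4, 0.1
X = rng.normal(size=(m, n))
y = X @ np.array([2.0, -1.0, 0.5])
theta = np.zeros(n)

def worker_gradient(X_part, y_part, theta):
    """Gradient contribution from one machine's slice of the training set."""
    return X_part.T @ (X_part @ theta - y_part)

for _ in range(50):
    # "Map": each machine works on its own quarter of the data.
    parts = zip(np.array_split(X, N), np.array_split(y, N))
    grads = [worker_gradient(Xp, yp, theta) for Xp, yp in parts]
    # "Reduce": the master accumulates the N partial gradients and updates.
    theta -= (alpha / m) * sum(grads)

print(theta)  # approaches [2.0, -1.0, 0.5]
```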