Keras: Why is the training loss much higher than the testing loss?
2017-10-04 11:52
A Keras model has two modes: training and testing. Regularization mechanisms, such as Dropout and L1/L2 weight regularization, are turned off at testing time. Besides, the training loss is the average of the losses over each batch of training data. Because your model is changing over time, the loss over the first batches of an epoch is generally higher than over the last batches. On the other hand, the testing loss for an epoch is computed using the model as it is at the end of the epoch, resulting in a lower loss.
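Both effects can be illustrated with a small sketch. The `dropout` function below is a hypothetical NumPy stand-in for Keras's Dropout layer (not Keras's own API), showing that it perturbs activations only in training mode; the second part uses made-up per-batch losses to show why averaging over an improving epoch yields a higher number than the end-of-epoch model achieves:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- 1. Dropout is active only in training mode ---------------------------
def dropout(x, rate=0.5, training=False):
    """Inverted dropout (illustrative helper, not Keras's own API)."""
    if not training:
        return x                      # testing mode: the layer is the identity
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)    # training mode: zero some units, rescale the rest

x = np.ones(8)
print(dropout(x, training=True))      # noisy: a mix of 0.0 and 2.0
print(dropout(x, training=False))     # unchanged: all ones

# --- 2. Training loss is averaged over an improving epoch -----------------
# Hypothetical per-batch losses recorded while the model improves:
batch_losses = [0.9, 0.7, 0.5, 0.4, 0.3]
epoch_training_loss = np.mean(batch_losses)   # 0.56: what Keras reports as "loss"
end_of_epoch_loss = batch_losses[-1]          # 0.30: roughly what testing sees
print(epoch_training_loss, end_of_epoch_loss)
```

So even with no overfitting at all, the reported training loss sits above the testing loss: the training number is both noised by regularization and dragged up by the early, worse batches of the epoch.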