Open Access Article

A Review of Golden Days of Deep Learning Process

K Jyothi1, K Sunitha2

Section: Review Paper, Product Type: Journal Paper
Volume-07 , Issue-06 , Page no. 144-149, Mar-2019

CrossRef-DOI:   https://doi.org/10.26438/ijcse/v7si6.144149

Online published on Mar 20, 2019

Copyright © K Jyothi, K Sunitha. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


How to Cite this Paper


IEEE Style Citation: K Jyothi, K Sunitha, “A Review of Golden Days of Deep Learning Process,” International Journal of Computer Sciences and Engineering, Vol.07, Issue.06, pp.144-149, 2019.

MLA Style Citation: K Jyothi, K Sunitha. "A Review of Golden Days of Deep Learning Process." International Journal of Computer Sciences and Engineering 07.06 (2019): 144-149.

APA Style Citation: K Jyothi, K Sunitha (2019). A Review of Golden Days of Deep Learning Process. International Journal of Computer Sciences and Engineering, 07(06), 144-149.

BibTex Style Citation:
@article{Jyothi_2019,
author = {K Jyothi and K Sunitha},
title = {A Review of Golden Days of Deep Learning Process},
journal = {International Journal of Computer Sciences and Engineering},
volume = {07},
number = {06},
month = {3},
year = {2019},
issn = {2347-2693},
pages = {144-149},
url = {https://www.ijcseonline.org/full_spl_paper_view.php?paper_id=887},
doi = {10.26438/ijcse/v7si6.144149},
publisher = {IJCSE, Indore, INDIA},
}

RIS Style Citation:
TY  - JOUR
DO  - 10.26438/ijcse/v7si6.144149
UR  - https://www.ijcseonline.org/full_spl_paper_view.php?paper_id=887
TI  - A Review of Golden Days of Deep Learning Process
T2  - International Journal of Computer Sciences and Engineering
AU  - K Jyothi
AU  - K Sunitha
PY  - 2019
DA  - 2019/03/20
PB  - IJCSE, Indore, INDIA
SP  - 144
EP  - 149
IS  - 06
VL  - 07
SN  - 2347-2693
ER  -


Abstract

The end of Moore's law and Dennard scaling has brought the end of rapid improvement in general-purpose program performance. Machine learning (ML), and in particular deep learning, is an attractive alternative for architects to explore. It has recently revolutionized vision, speech, language understanding, and many other fields, and it promises to help address the grand challenges facing our society. The computation at its core is low-precision linear algebra. ML is therefore both broad enough to apply to many domains and narrow enough to benefit from domain-specific architectures, such as Google's Tensor Processing Unit (TPU). Moreover, the growth in demand for ML computing exceeds Moore's law at its peak, just as that law is fading. Consequently, ML experts and computer architects must work together to design the computing systems required to deliver on the potential of ML. This article offers motivation, suggestions, and warnings to computer architects on how best to contribute to the ML revolution.
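The abstract's central technical claim is that the computation at the core of ML is low-precision linear algebra. As an illustration only (this sketch is not from the paper), the following shows the pattern TPU-style accelerators exploit: quantize float32 operands to int8 with a per-tensor scale, multiply-accumulate in wide integers, and rescale the result. The function names and shapes here are assumptions for the example.

```python
import numpy as np

def quantize(x, num_bits=8):
    """Symmetric per-tensor quantization: float32 -> int8 plus one scale."""
    qmax = 2 ** (num_bits - 1) - 1               # 127 for int8
    scale = np.max(np.abs(x)) / qmax             # single scale for the tensor
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def quantized_matmul(a, b):
    """Low-precision matmul: int8 inputs, int32 accumulation, float32 rescale."""
    qa, sa = quantize(a)
    qb, sb = quantize(b)
    acc = qa.astype(np.int32) @ qb.astype(np.int32)  # integer MACs, wide accumulator
    return acc.astype(np.float32) * (sa * sb)        # dequantize the product

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8)).astype(np.float32)
b = rng.standard_normal((8, 3)).astype(np.float32)

exact = a @ b
approx = quantized_matmul(a, b)
print(np.max(np.abs(exact - approx)))  # small quantization error
```

The point of the sketch is the arithmetic shape of the workload, not a particular API: the inner loop is integer multiply-accumulate, which is exactly the operation domain-specific hardware like the TPU builds dense arrays of.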

Key-Words / Index Term

Machine Learning, Moore's Law, Different Fields, Google's Tensor Processing Unit
