Open Access Article

A Review of Text Summarization using Gated Neural Networks

Touseef Iqbal¹, Abhishek Singh Sambyal², Devanand³

Section: Review Paper, Product Type: Journal Paper
Volume-06, Issue-03, Page no. 51-55, Apr-2018

CrossRef-DOI: https://doi.org/10.26438/ijcse/v6si3.5155

Published online on Apr 30, 2018

Copyright © Touseef Iqbal, Abhishek Singh Sambyal, Devanand. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


How to Cite this Paper

IEEE Style Citation: Touseef Iqbal, Abhishek Singh Sambyal, Devanand, “A Review of Text Summarization using Gated Neural Networks,” International Journal of Computer Sciences and Engineering, Vol.06, Issue.03, pp.51-55, 2018.

MLA Style Citation: Touseef Iqbal, Abhishek Singh Sambyal, Devanand. “A Review of Text Summarization using Gated Neural Networks.” International Journal of Computer Sciences and Engineering 06.03 (2018): 51-55.

APA Style Citation: Touseef Iqbal, Abhishek Singh Sambyal, Devanand (2018). A Review of Text Summarization using Gated Neural Networks. International Journal of Computer Sciences and Engineering, 06(03), 51-55.

BibTex Style Citation:
@article{Iqbal_2018,
author = {Touseef Iqbal and Abhishek Singh Sambyal and Devanand},
title = {A Review of Text Summarization using Gated Neural Networks},
journal = {International Journal of Computer Sciences and Engineering},
issue_date = {April 2018},
volume = {06},
issue = {03},
month = {April},
year = {2018},
issn = {2347-2693},
pages = {51-55},
url = {https://www.ijcseonline.org/full_spl_paper_view.php?paper_id=317},
doi = {10.26438/ijcse/v6i3.5155},
publisher = {IJCSE, Indore, INDIA},
}

RIS Style Citation:
TY - JOUR
DO - 10.26438/ijcse/v6i3.5155
UR - https://www.ijcseonline.org/full_spl_paper_view.php?paper_id=317
TI - A Review of Text Summarization using Gated Neural Networks
T2 - International Journal of Computer Sciences and Engineering
AU - Iqbal, Touseef
AU - Sambyal, Abhishek Singh
AU - Devanand
PY - 2018
DA - 2018/04/30
PB - IJCSE, Indore, INDIA
SP - 51
EP - 55
IS - 03
VL - 06
SN - 2347-2693
ER -


Abstract

There is an enormous amount of information available in the form of documents, articles, links, webpages, etc., which cannot be read in full unless it is effectively summarized. Various techniques are used to extract the most important information from this data and produce a summary. This paper gives a brief description of text summarization and of a deep learning approach, Recurrent Neural Networks (RNNs). Recent advances in deep RNN methods, such as Sequence to Sequence models and Generative Adversarial Networks, show remarkable results for text summarization. Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks are discussed as ways to overcome the vanishing and exploding gradient problems.
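
To make the gating idea concrete, the standard GRU update equations from the literature (see Chung et al. [19]) are sketched below; they are given here for orientation and are not reproduced from the paper itself:

\begin{aligned}
z_t &= \sigma(W_z x_t + U_z h_{t-1} + b_z) && \text{(update gate)} \\
r_t &= \sigma(W_r x_t + U_r h_{t-1} + b_r) && \text{(reset gate)} \\
\tilde{h}_t &= \tanh\big(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h\big) && \text{(candidate state)} \\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t && \text{(new hidden state)}
\end{aligned}

Here $\sigma$ is the logistic sigmoid, $\odot$ is element-wise multiplication, and $x_t$, $h_t$ are the input and hidden state at step $t$. Because $h_t$ is a convex combination of the previous state $h_{t-1}$ and the candidate $\tilde{h}_t$, gradients can flow through the $(1-z_t) \odot h_{t-1}$ path with little attenuation; this is how GRUs (and, analogously, LSTMs via their cell state) mitigate the vanishing gradient problem of plain RNNs.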

Key-Words / Index Term

RNN, Sequence to Sequence, Vanishing gradient, LSTM, GRU, Deep Recurrent Generative Decoder, GANs

References

[1] D.K. Gaikwad, C.N. Mahender, “A Review Paper on Text Summarization”, International Journal of Advanced Research in Computer and Communication Engineering, Vol.5, Issue.3, 2016.
[2] R. Nallapati, F. Zhai and B. Zhou, “SummaRuNNer: A Recurrent Neural Network Based Sequence Model for Extractive Summarization of Documents”, In AAAI, pp.3075-3081, February 2017.
[3] V. Gupta and G.S. Lehal, “A survey of text summarization extractive techniques”, Journal of Emerging Technologies in Web Intelligence, 2(3), pp.258-268, 2010.
[4] M. Allahyari, S. Pouriyeh, M. Assefi, S. Safaei, E.D. Trippe, J.B. Gutierrez and K. Kochut, “Text summarization techniques: A brief survey”, arXiv preprint arXiv:1707.02268, 2017.
[5] D. Das and A.F. Martins, “A survey on automatic text summarization”, Literature Survey for the Language and Statistics II course at CMU, 4, pp.192-195, 2007.
[6] A.S. Asa, S. Akter, M.P. Uddin, M.D. Hossain, S.K. Roy and M.I. Afjal, “A Comprehensive Survey on Extractive Text Summarization Techniques”, American Journal of Engineering Research, Vol.6, Issue.1, pp.226-239, 2017.
[7] N. Ramanujam and M. Kaliappan, “An Automatic Multidocument Text Summarization Approach Based on Naïve Bayesian Classifier Using Timestamp Strategy”, The Scientific World Journal, 2016.
[8] A. Agrawal and U. Gupta, “Extraction based approach for text summarization using k-means clustering”, Int. J. Sci. Res. Publ. (IJSRP), 4(11), 2014.
[9] A. Nenkova and K. McKeown, “A survey of text summarization techniques”, In Mining Text Data, pp.43-76, Springer, Boston, MA, 2012.
[10] S. Verma and V. Nidhi, “Extractive Summarization using Deep Learning”, arXiv preprint arXiv:1708.04439, 2017.
[11] M. Gambhir and V. Gupta, “Recent automatic text summarization techniques: a survey”, Artificial Intelligence Review, 47(1), pp.1-66, 2017.
[12] K.M. Tarwani and S. Edem, “Survey on Recurrent Neural Network in Natural Language Processing”, International Journal on Emerging Trends in Technology, Vol.48, 2017.
[13] Z.C. Lipton, J. Berkowitz and C. Elkan, “A critical review of recurrent neural networks for sequence learning”, arXiv preprint arXiv:1506.00019, 2015.
[14] A. Graves, A. Mohamed and G. Hinton, “Speech recognition with deep recurrent neural networks”, In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp.6645-6649, 2013.
[15] V. Khomenko, O. Shyshkov, O. Radyvonenko and K. Bokhan, “Accelerating recurrent neural network training using sequence bucketing and multi-GPU data parallelization”, In IEEE First International Conference on Data Stream Mining & Processing (DSMP), pp.100-103, August 2016.
[16] H. Yu, C. Yue and C. Wang, “News Article Summarization with Attention-based Deep Recurrent Neural Networks”, Stanford Natural Language Processing Group, Stanford University, 2016.
[17] T. Mikolov, M. Karafiát, L. Burget, J. Černocký and S. Khudanpur, “Recurrent neural network based language model”, In Eleventh Annual Conference of the International Speech Communication Association, 2010.
[18] Y. Hu, A. Huber, J. Anumula and S.C. Liu, “Overcoming the vanishing gradient problem in plain recurrent networks”, arXiv preprint arXiv:1801.06105, 2018.
[19] J. Chung, C. Gulcehre, K. Cho and Y. Bengio, “Empirical evaluation of gated recurrent neural networks on sequence modeling”, arXiv preprint arXiv:1412.3555, 2014.
[20] H. Sak, A. Senior and F. Beaufays, “Long short-term memory recurrent neural network architectures for large scale acoustic modeling”, In Fifteenth Annual Conference of the International Speech Communication Association, 2014.
[21] R. Dey and F.M. Salem, “Gate-variants of gated recurrent unit (GRU) neural networks”, arXiv preprint, 2017.
[22] S. Hochreiter and J. Schmidhuber, “Long short-term memory”, Neural Computation, 9(8), pp.1735-1780, 1997.
[23] C.S. Saranyamol and L. Sindhu, “A survey on automatic text summarization”, Int. J. Comput. Sci. Inf. Technol., 5(6), pp.7889-7893, 2014.
[24] A. Graves, M. Liwicki, S. Fernández, R. Bertolami, H. Bunke and J. Schmidhuber, “A novel connectionist system for unconstrained handwriting recognition”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(5), pp.855-868, 2009.
[25] D.P. Kingma and M. Welling, “Auto-encoding variational bayes”, arXiv preprint arXiv:1312.6114, 2013.
[26] P. Li, W. Lam, L. Bing and Z. Wang, “Deep Recurrent Generative Decoder for Abstractive Text Summarization”, arXiv preprint arXiv:1708.00625, 2017.
[27] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio, “Generative adversarial nets”, In Advances in Neural Information Processing Systems, pp.2672-2680, 2014.
[28] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever and P. Abbeel, “InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets”, In Advances in Neural Information Processing Systems, pp.2172-2180, 2016.
[29] L. Liu, Y. Lu, M. Yang, Q. Qu, J. Zhu and H. Li, “Generative Adversarial Network for Abstractive Text Summarization”, arXiv preprint arXiv:1711.09357, 2017.