Human Activity Recognition using Deep with Gradient Fused Handcrafted Features and categorization based on Machine Learning Technique
Research Paper | Journal Paper
Vol.06, Issue.07, pp.1-7, Sep-2018
Abstract
Human action recognition (HAR) from videos is a significant research focus in the domain of computer vision. Its purpose is to detect and recognize human actions from a sequence of frames. Human action recognition faces many difficulties, such as differences in human shape, cluttered backgrounds, moving cameras, illumination changes, motion, occlusion, and viewpoint variations. Previously, either local hand-crafted features or deep learned features were used to recognize actions. In the proposed work, both kinds of features are used for recognition and analysis. The background is subtracted from the frame sequence using a multi-frame averaging method, and two kinds of feature extraction are performed. The first uses hand-crafted features, namely shape-based features and optical flow features, with classification by a Hidden Markov Model (HMM). The second uses deep learned features: a Convolutional Neural Network (CNN) extracts features such as lines, edges, colour and texture at each layer, and classification is done with an SVM. Hand-crafted features achieve good results for human action recognition but fail on large datasets, whereas deep learned features such as CNN features give good recognition results on large datasets. To improve human action recognition, the CNN-based approach is proposed. We compared the CNN and HMM approaches and analysed the results; CNN yields better accuracy than HMM.
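The multi-frame averaging background subtraction step described in the abstract can be sketched as follows (a minimal NumPy illustration; the toy frame data and the threshold value are invented for the example, not taken from the paper):

```python
import numpy as np

def subtract_background(frames, threshold=100):
    """Multi-frame averaging background subtraction.

    frames: array of shape (T, H, W) holding grayscale frames.
    Returns one binary foreground mask per frame: a pixel is foreground
    when it differs from the per-pixel temporal mean by more than threshold.
    """
    background = frames.mean(axis=0)  # per-pixel average over time
    masks = []
    for frame in frames:
        diff = np.abs(frame.astype(float) - background)
        masks.append((diff > threshold).astype(np.uint8))
    return masks

# Toy sequence: a black scene with one bright pixel moving left to right.
frames = np.zeros((5, 8, 8))
for t in range(5):
    frames[t, 2, t] = 255
masks = subtract_background(frames)
print(masks[0][2, 0])  # the moving pixel is flagged as foreground: 1
```

The averaged background suppresses anything static, so only the moving region survives the threshold; the resulting silhouettes feed the shape and optical flow feature extraction stages.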
Key-Words / Index Term
Background Subtraction, Convolutional Neural Network, Canny Edge Detection, Optical Flow, Hidden Markov Model
References
[1] M. Ahmad and Seong-Whan Lee. “HMM-based Human Action Recognition Using Multiview Image Sequences”. IEEE 18th International Conference on Pattern Recognition, vol. 4, pp. 874-879, 2006.
[2] Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. “Large-scale Video Classification with Convolutional Neural Networks”. IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2014.
[3] Corinna Cortes and Vladimir Vapnik. “Support-Vector Networks”. Machine Learning, vol. 20, pp. 273–297, 1995.
[4] P. Dollar, V. Rabaud, G. Cottrell, and S. Belongie. “Behavior recognition via sparse spatio-temporal features”. IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, pp. 65–72, 2006.
[5] Figueroa-Angulo J., Savage J., Bribiesca E., Escalante B. and Sucar L. “Compound Hidden Markov Model for Activity Labelling”. International Journal of Intelligence Science, vol. 5, pp. 177-195, 2015.
[6] Fu Jie Huang and Yann LeCun. “Large-scale Learning with SVM and Convolutional Nets for Generic Object Categorization”. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, 2006.
[7] Imran N. Junejo, Khurrum Nazir Junejo and Zaher Al Aghbari. “Silhouette-based human action recognition using SAX-Shapes”. Springer, vol. 30, pp. 259-269, 2014.
[8] Jie Yang, Jian Cheng and Hanqing Lu. “Human Activity Recognition based on the Blob Features”. IEEE International Conference on Multimedia and Expo, pp. 358-361, 2009.
[9] Limin Wang, Yu Qiao, and Xiaoou Tang. “Action Recognition with Trajectory-Pooled Deep-Convolutional Descriptors”. IEEE Conference on Computer Vision and Pattern Recognition, pp. 7–12, 2015.
[10] Maheshkumar H. Kolekar and Deba Prasad Dash. “Hidden Markov Model based human activity recognition using shape and optical flow based features”. IEEE Region 10 Conference (TENCON), pp. 393-396, 2016.
[11] Navid Nourani-Vatani, Paulo V. K. Borges and Jonathan M. Roberts. “A Study of Feature Extraction Algorithms for Optical Flow Tracking”. Australasian Conference on Robotics and Automation, Australian Robotics and Automation Association, 2012.
[12] Palwasha Afsar and Paulo Cortez. “Automatic Human Action Recognition from Video Using Hidden Markov Model”. IEEE 18th International Conference on Computational Science and Engineering, pp. 105-109, 2015.
[13] Sheng Yu, Yun Cheng, Songzhi Su, Guorong Cai, and Shaozi Li. “Stratified pooling based deep convolutional neural networks for human action recognition”. Multimedia Tools and Applications, vol. 76, pp. 13367–13382, 2016.
[14] Xiaojiang Peng, Limin Wang, Xingxing Wang, and Yu Qiao. “Bag of Visual Words and Fusion Methods for Action Recognition: Comprehensive Study and Good Practice”. Elsevier, vol. 150, 2016.
[15] Xin Yuan and Xubo Yang. “A Robust Human Action Recognition System Using Single Camera”. IEEE International Conference on Computational Intelligence and Software Engineering, pp. 1-4, 2009.
[16] Md. Zia Uddin, Nguyen Duc Than and Tae-Seong Kim. “Human Activity Recognition via 3-D joint angle features and Hidden Markov models”. IEEE International Conference on Image Processing, pp. 713-716, 2010.
[17] Zhenzhong Lan, Shoou-I Yu, Ming Lin, Bhiksha Raj, and Alexander G. Hauptmann. “Local Handcrafted Features Are Convolutional Neural Networks”. International Conference on Learning Representations, pp. 43–56, 2016.
[18] http://crcv.ucf.edu/data/UCF50.php
Citation
G. Augusta Kani, P. Geetha, A. Gomathi, "Human Activity Recognition using Deep with Gradient Fused Handcrafted Features and categorization based on Machine Learning Technique", International Journal of Computer Sciences and Engineering, Vol.06, Issue.07, pp.1-7, 2018.
A Review on Texture Descriptors in 2D Ear Recognition
Review Paper | Journal Paper
Vol.06, Issue.07, pp.8-12, Sep-2018
Abstract
Ear recognition is an active area of research, and automatic ear recognition is one of the challenging problems in the biometric and forensic domains. The human ear contains a large number of unique features for recognizing an individual. There are different approaches and descriptors that achieve relatively good results in ear biometric recognition, but studies show poor recognition performance in cases of occlusion, illumination variation and pose variation. This paper presents an overview of different local texture descriptors in the field of automatic ear recognition. Local descriptors, which compute features from small local patches, have proven more effective in real-world situations than global descriptors, which extract features from the whole image.
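As an illustration of the local-descriptor idea, the basic Local Binary Pattern (LBP) operator, one of the descriptors surveyed, can be written in a few lines (a plain NumPy sketch of the original 3×3 LBP; production systems typically use optimized implementations such as scikit-image's `local_binary_pattern`):

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour Local Binary Pattern (radius 1).

    Each interior pixel is replaced by an 8-bit code with one bit per
    neighbour, set when that neighbour >= the centre pixel. A histogram
    of these codes over local patches is the texture feature.
    """
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Neighbour offsets, clockwise starting from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            centre = img[i, j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if img[i + di, j + dj] >= centre:
                    code |= 1 << bit
            out[i - 1, j - 1] = code
    return out

img = np.array([[10, 20, 30],
                [40, 50, 60],
                [70, 80, 90]], dtype=np.uint8)
codes = lbp_image(img)
print(codes)  # [[120]]
```

Because each code depends only on a 3×3 neighbourhood, LBP is robust to monotonic illumination changes, which is one reason local texture descriptors degrade more gracefully than global ones under real-world conditions.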
Key-Words / Index Term
Ear, Biometric, Texture Descriptors, Feature Extraction, LBP, GLCM, LPQ
References
[1] Emersic, Z., Struc, V., & Peer, P. (2017). Ear recognition: More than a survey. Neurocomputing, 255, 26-39.
[2] A. Iannarelli. Ear identification. Forensic Identification Series. Paramont publishing company, Fremont,California,1989.
[3] A. Kumar, C. Wu, Automated human identification using ear imaging, Pattern Recognition(2011) ,doi:10.1016/j.patcog.2011.06.005 .
[4] Choras M. (2004) Human Ear Identification Based on Image Analysis. In: Rutkowski L., Siekmann J.H., Tadeusiewicz R., Zadeh L.A. (eds) Artificial Intelligence and Soft Computing - ICAISC 2004. ICAISC 2004. Lecture Notes in Computer Science, vol 3070. Springer, Berlin, Heidelberg.
[5] Bertillon A. Identification Anthropometrique: Instructions Signaletique; 1885.
[6] M. Burge, W. Burger, Biometrics: Personal Identification in Networked Society, Springer US, Boston, MA, 1996, Ch. Ear Biometrics, pp. 273–285.
[7] B. Moreno, A. Sánchez, J. F. Vélez, On the use of outer ear images for personal identification in security applications, in: Proceedings of the International Carnahan Conference on Security Technology, IEEE, 1999, pp. 469–476.
[8] Z. Mu, L. Yuan, Z. Xu, D. Xi, S. Qi, Shape and structural feature based ear recognition, in: Advances in biometric person authentication, Springer, 2004, pp. 663–670.
[9] M. Choras, R. S. Choras, Geometrical algorithms of ear contour shape representation and feature extraction, in: Proceedings of the International Conference on Intelligent Systems Design and Applications, IEEE, 2006, pp. 451–456.
[10] D. J. Hurley, M. S. Nixon, J. N. Carter, Automatic ear recognition by force field transformations, in: Proceedings of the Colloquium on Visual Biometrics, IET, 2000, pp. 7–1.
[11] B. Victor, K. Bowyer, S. Sarkar, An evaluation of face and ear biometrics, in: Proceedings of the International Conference on Pattern Recognition, Vol. 1, IEEE, 2002, pp. 429–432.
[12] K. Chang, K. W. Bowyer, S. Sarkar, B. Victor, Comparison and combination of ear and face images in appearance-based biometrics, Transactions on Pattern Analysis and Machine Intelligence 25 (9) (2003) 1160–1165.
[13] H.-J. Zhang, Z.-C. Mu, W. Qu, L.-M. Liu, C.-Y. Zhang, A novel approach for ear recognition based on ICA and RBF network, in: Proceedings of the International Conference on Machine Learning and Cybernetics, Vol. 7, IEEE, 2005, pp. 4511–4515.
[14] L. Nanni, A. Lumini, Fusion of color spaces for ear authentication,Pattern Recognition 42 (9) (2009) 1906–1913.
[15] A. Pflug, C. Busch, A. Ross, 2D ear classification based on unsupervised clustering, in: Proceedings of the International Joint Conference on Biometrics, IEEE, 2014, pp. 1–8.
[16] A. Benzaoui, N. Hezil, A. Boukrouche, Identity recognition based on the external shape of the human ear, in: Proceedings of the International Conference on Applied Research in Computer Science and Engineering, IEEE, 2015, pp. 1–5.
[17] A. Pflug, P. N. Paul, C. Busch, A comparative study on texture and surface descriptors for ear biometrics, in: Proceedings of the International Carnahan Conference on Security Technology, IEEE, 2014, pp. 1–6.
[18] L. Jacob, G. Raju, Advances in Signal Processing and Intelligent Recognition Systems, Springer International Publishing,Cham, 2014, Ch. Ear Recognition Using Texture Features – A Novel Approach, pp. 1–12.
[19] Ojala, T. and Pietikäinen, M. (1999), Unsupervised Texture Segmentation Using Feature Distributions. Pattern Recognition 32:477-486.
[20] V. Ojansivu and J. Heikkilä, “Blur insensitive texture classification using local phase quantization,” in Proc. 3rd Int. Conf. on Image and Signal Processing (ICISP), pp. 236–243, Springer-Verlag, Berlin, Heidelberg (2008).
[21] J. Kannala and E. Rahtu, “BSIF: binarized statistical image features,” in Proc. IEEE Int. Conf. on Pattern Recognition (ICPR), pp. 1363–1366, IEEE, Tsukuba, Japan (2012).
[22] T.-S. Chan, A. Kumar, Reliable ear identification using 2-D quadrature filters, Pattern Recognition Letters 33 (14) (2012)1870–1881.
[23] A. Kumar, T.-S. T. Chan, Robust ear identification using sparse representation of local texture descriptors, Pattern recognition 46 (1) (2013) 73–85.
[24] A. Basit, M. Shoaib, A human ear recognition method using nonlinear curvelet feature subspace, International Journal of Computer Mathematics 91 (3) (2014) 616–624.
[25] A. Benzaoui, A. Hadid, A. Boukrouche, Ear biometric recognition using local texture descriptors, Journal of Electronic Imaging 23 (5) (2014) 053008.
[26] A. Benzaoui, A. Kheider, A. Boukrouche, Ear description and recognition using ELBP and wavelets, in: Proceedings of the International Conference on Applied Research in Computer Science and Engineering, 2015, pp. 1–6.
[27] H. Bourouba, H. Doghmane, A. Benzaoui, A. H. Boukrouche, Ear recognition based on multi-bags-of-features histogram, in: Proceedings of the International Conference on Control, Engineering Information Technology, 2015, pp. 1–6.
[28] A. Meraoumia, S. Chitroub, A. Bouridane, An automated ear identification system using Gabor filter responses, in: Proceedings of the International Conference on New Circuits and Systems, IEEE, 2015, pp. 1–4.
[29] Z. Youbi et al., “Human ear recognition based on multi-scale local binary pattern descriptor and KL divergence,” in Proc. of the 39th IEEE Int. Conf. on Telecommunications and Signal Processing (TSP), pp. 685–688 (2016).
[30] Amir Benzaoui, Insaf Adjabi, Abdelhani Boukrouche, “Experiments and improvements of ear recognition based on local texture descriptors,” Opt. Eng. 56(4), 043109 (2017).
Citation
Resmi K R, G Raju, "A Review on Texture Descriptors in 2D Ear Recognition", International Journal of Computer Sciences and Engineering, Vol.06, Issue.07, pp.8-12, 2018.
Comparative Evaluation on Supervised Learning Based Age Estimation
Research Paper | Journal Paper
Vol.06, Issue.07, pp.13-18, Sep-2018
Abstract
Facial age estimation has received increasing attention in computer vision over the past few years. Age estimation is a difficult task because the differences between facial images across age variations are subtle. In this work, we analyze the age prediction problem using an SVR model and a deep learning technique, and attempt to determine the relative efficiency of SVR and a Convolutional Neural Network (CNN) for age estimation. Local features such as wrinkles and texture are extracted using Gabor filters, Local Binary Patterns (LBP) and Local Phase Quantization (LPQ). The three feature sets are combined, and the dimension of the resulting feature vector is reduced using Principal Component Analysis. Support Vector Regression (SVR) is then used to predict the age of an individual. For the CNN, the datasets are fine-tuned using the pre-trained VGG-16 model, which can classify images into 1000 categories. Experimental results on the IMDB-WIKI, ICCV and MORPH 2 datasets show that the CNN outperforms the local-feature-based SVR model in predicting age.
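The hand-crafted pipeline described above (descriptor fusion, PCA reduction, SVR prediction) can be sketched with scikit-learn; here random vectors stand in for the Gabor, LBP and LPQ descriptors, so the dimensions and outputs are illustrative only:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Stand-ins for the three descriptor vectors of 200 face images;
# in the paper these come from Gabor filters, LBP and LPQ.
gabor = rng.normal(size=(200, 40))
lbp = rng.normal(size=(200, 59))
lpq = rng.normal(size=(200, 256))
ages = rng.uniform(18, 70, size=200)

# Fuse by concatenation, reduce with PCA, then regress age with SVR.
features = np.hstack([gabor, lbp, lpq])
model = make_pipeline(StandardScaler(), PCA(n_components=30), SVR(kernel="rbf"))
model.fit(features, ages)
pred = model.predict(features[:5])
print(pred.shape)  # (5,)
```

Standardizing before PCA matters here because the three descriptor families live on different scales; without it the highest-variance descriptor would dominate the principal components.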
Key-Words / Index Term
Convolutional Neural Network, Local binary Pattern, Local Phase Quantization, Gabor Filter, Support Vector Regression
References
[1]. Maximilian Riesenhuber and Tomaso Poggio (1999), “Hierarchical models of object recognition in cortex”, Nature Neuroscience, vol. 2, no. 11, pp. 1019–1025.
[2]. Xin Geng, Chao Yin, and Zhi-Hua Zhou (2013), “Facial age estimation by learning from label distributions”, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 35, no. 10, pp. 2401–2412.
[3]. Hu Han, Charles Otto, and Anil K Jain (2013),” Age estimation from face images: Human vs. machine performance”, In International Conference on Biometrics (ICB). IEEE, pp.1-8.
[4]. Wen-Bing Horng, Cheng-Ping Lee and Chun-Wen Chen (2001),“Classification of Age Groups Based on Facial Features”, Tamkang Journal of Science and Engineering, vol.4, no.3, pp.183-192.
[5]. J. Suo, Min Feng, S. Zhu, S. Shan, X. Chen (2007), “A multi-resolution dynamic model for face aging simulation”, Proc. IEEE CS Conf. Computer Vision and Pattern Recognition, pp. 1-8.
[6]. Z. J. Xu, H.Chen, and S. C. Zhu (2005),“A high resolution grammatical model for face representation and sketching”. IEEE CVPR, pp: 470-477.
[7]. J. Hayashi, M. Yasumoto, H. Ito, Y. Niwa, and H. Koshimizu (2002), “Age and Gender Estimation from Facial Image Processing”, the 41st SICE Annual Conference, vol. 1, pp. 13-18, Aug.
[8]. K. Ricanek, Y. Wang, C. Chen, S. J. Simmons (2009), “Generalized multi-ethnic face age-estimation”, in Biometrics: Theory, Applications, and Systems, 2009. BTAS’09. IEEE 3rd International Conference on. IEEE, pp. 1-6.
[9]. Y. Kwon and N. Da Vitoria Lobo (1999), “Age classification from facial images”, Computer Vision and Image Understanding, vol. 74, no. 1, pp. 1–21.
[10]. A. Lanitis, C. Draganova, and C. Christodoulou (2004), “Comparing different classifiers for automatic age estimation”, IEEE Transactions on SMC-B, vol. 34, no. 1, pp. 621-628.
[11]. A. Lanitis, C. Taylor, T. Cootes (2002), “Toward Automatic Simulation of Aging Effects on Face Images”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 4, pp. 442-455.
Citation
A. Annie Micheal, P. Geetha , A. Saranya, "Comparative Evaluation on Supervised Learning Based Age Estimation", International Journal of Computer Sciences and Engineering, Vol.06, Issue.07, pp.13-18, 2018.
A Review on Deep Learning in Robotics
Review Paper | Journal Paper
Vol.06, Issue.07, pp.19-25, Sep-2018
Abstract
During the last few decades, there has been a surge of research in the area of deep learning. In this paper we review the limitations of deep learning in physical robotic systems, using currently available examples. The focus is on recent advances in the robotics community and the application of deep learning to robotics.
Key-Words / Index Term
Deep neural networks; artificial intelligence; human-robot interaction
References
[1] LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436-444.
[2] Jordan MI, Mitchell TM. Machine learning: trends, perspectives, and prospects. Science. 2015;349(6245):255-260.
[3] Böhmer W, Springenberg JT, Boedecker J, et al. Autonomous learning of state representations for control: an emerging field aims to autonomously learn state representations for reinforcement learning agents from their real-world sensor observations. KI-Künstliche Intelligenz. 2015;29(4):353-362.
[4] Stigler SM. Gauss and the invention of least squares. Ann of Statistics. 1981;9(3):465-474.
[5] Haykin S. Neural networks: a comprehensive foundation. 2nd ed. Upper Saddle River, New Jersey: Prentice Hall; 2004.
[6] Bryson AE, Denham WF, Dreyfus SE. Optimal programming problems with inequality constraints. AIAA Journal. 1963;1(11):2544-2550.
[7] Rumelhart DE, Hinton GE, Williams RJ. Learning representations by back propagating errors. Nature. 1986;323:533-536.
[8] Werbos P. Beyond regression: new tools for prediction and analysis in the behavioral sciences. [Ph.D. dissertation]. Dept. Statistics, Harvard Univ.; 1974.
[9] Cybenko G. Approximation by superpositions of a sigmoidal function. Math of Control, Signals and Sys. 1989;2(4):303-314.
[10] Hochreiter S. Untersuchungen zu dynamischen neuronalen Netzen. [Master's thesis]. Institut für Informatik, Technische Universität; 1991.
[11] Hochreiter S. The vanishing gradient problem during learning recurrent neural nets and problem solutions. Int. J. of Uncertainty, Fuzziness and Knowledge-Based Syst. 1998;6(2).
[12] Miyamoto H, Kawato M, Setoyama T, & Suzuki R. Feedback-error-learning neural network for trajectory control of a robotic manipulator. Neural Networks 1988;1(3):251-265.
[13] Lewis FW, Jagannathan S, & Yesildirak A. (1998). Neural network control of robot manipulators and non-linear systems. CRC Press.
[14] Miller WT, Werbos PJ, & Sutton RS. (1995). Neural networks for control. MIT Press.
[15] Lin CT., & Lee CSG. Neural-network-based fuzzy logic control and decision system. IEEE Transactions on Computers. 1991;40(12):1320-1336.
[16] Pomerleau DA. (1989). ALVINN, an autonomous land vehicle in a neural network (No. AIP-77). Carnegie Mellon University, Computer Science Department.
[17] Oh K, Jung K. GPU implementation of neural networks. Pattern Recognition. 2004;37(6):1311-1314.
[18] Hinton GE, Osindero S, Teh Y. A fast learning algorithm for deep belief nets. Neural Computation. 2006;18(7):1527-1554.
[19] Dean J, Corrado G, Monga R, et al. Large scale distributed deep networks. Advances in Neural Information Process. Syst. 25; 2012.
[20] Tani J, Ito M, & Sugita Y. Self-organization of distributedly represented multiple behavior schemata in a mirror system: reviews of robot experiments using RNNPB. Neural Networks. 2004;17(8):1273-1289.
[21] Ijspeert AJ. Central pattern generators for locomotion control in animals and robots: a review. Neural Networks. 2008;21(4):642-653.
[22] Gashler M, Martinez T. Temporal nonlinear dimensionality reduction. Neural Networks (IJCNN), 2011 International Joint Conference on; 2011. p. 1959-1966.
[23] Pomerleau DA (2012). Neural network perception for mobile robot guidance (Vol.239). Springer Science & Business Media.
[24] Thrun S. Learning to play the game of chess. Advances in Neural Inform. Process. Syst.: Proc. of the 1994 Conf.
[25] Campbell M, Hoane AJ, Hsu F. Deep blue. Artificial Intelligence. 2002;134(1):57-83.
[26] Pinto N, Cox DD, DiCarlo JJ. Why is real-world visual object recognition hard? PLoS Computational Biology. 2008;4(1).
[27] Schmidhuber J. Deep learning in neural networks: an overview. Neural Networks. 2015;61:85-117.
[28] Graves A, Liwicki M, Fernández S, et al. A novel connectionist system for unconstrained handwriting recognition. Pattern Anal and Machine Intell., IEEE trans. on. 2009;31(5):855-868.
[29] Yang M, Ji S, Xu W, et al. Detecting human actions in surveillance videos. TREC Video Retrieval Evaluation Workshop; 2009.
[30] Lin M, Chen Q, Yan S. Network in network. 2013. Available: https://arxiv.org/abs/1312.4400
[31] Ciresan D, Giusti A, Gambardella LM, et al. Deep neural networks segment neuronal membranes in electron microscopy images. Advances in Neural Information Processing Sys 25; 2012.
[32] Roux L, Racoceanu D, Lomenie N, et al. Mitosis detection in breast cancer histological images an ICPR 2012 contest. J Pathol Inform. 2013;4(8).
[33] Cireşan DC, Giusti A, Gambardella LM, et al. Mitosis detection in breast cancer histology images with deep neural networks. In: K. Mori, I. Sakuma, Y. Sato, C. Barillot and N. Navab, editors. Medical Image Computing and Computer- Assisted Intervention–MICCAI 2013. Springer; 2013.
[34] Cireşan D, Meier U, Masci J, et al. A committee of neural networks for traffic sign classification. Neural Networks (IJCNN), 2011 Int. Joint Conf. on; 2011. p. 1918-1921.
[35] Ciresan D, Meier U, Schmidhuber J. Multi-column deep neural networks for image classification. Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on; 2012. p. 3642-3649.
[36] Dunne RA, & Campbell NA. On the pairing of the softmax activation and cross-entropy penalty functions and the derivation of the softmax activation function. In Proc. 8th Aust. Conf. on the Neural Networks, Melbourne; 1997. p. 181-185.
[37] Wilson DR, Martinez TR. The general inefficiency of batch training for gradient descent learning. Neural Networks. 2003;16(10):1429-1451.
[38] Tieleman T, Hinton G. (2012). Lecture 6.5-rmsprop: divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2).
[39] Kingma D, Ba J. (2014). Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
[40] Vincent P, Larochelle H, Lajoie I, et al. Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion. J Mach Learning Research. 2010;11:3371-3408.
[41] Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25; 2012.
[42] LeCun Y, Bengio Y. Convolutional networks for images, speech, and time series. In M. Arbib, editor. The Handbook of Brain Theory and Neural Networks. 2nd edition. Cambridge, MA: MIT Press; 2003.
[43] Werbos PJ. Backpropagation through time: what it does and how to do it. Proc. IEEE. 1990;78(10):1550-1560.
[44] Sjöberg J, Zhang Q, Ljung L, et al. Nonlinear black-box modeling in system identification: a unified overview. Automatica. 1995;31(12):1691-1724.
[45] Hochreiter S, Schmidhuber J. Long short-term memory. Neural Computation. 1997;9(8):1735-1780.
[46] Atkeson CG, Santamaria JC. A comparison of direct and model-based reinforcement learning. Robotics and Automation, IEEE Int. Conf. on; Albuquerque, NM. 1997. p. 3557-3564.
[47] Mnih V, Kavukcuoglu K, Silver D, et al. Human-level control through deep reinforcement learning. Nature. 2015;518:529-533.
[48] A roadmap for US robotics: from internet to robotics, 2016 edition.
[49] World Technology Evaluation Center, Inc. International Assessment of Research and Development in Robotics. Baltimore, MD, USA; 2006.
[50] FY2009-2034 Unmanned systems integrated roadmap. Washington, DC: Department of Defence (US); 2009.
[51] Material Handling Institute. Material handling and logistics U.S. roadmap 2.0. 2017.
[52] DARPA Robotics Challenge [Internet]. [cited 2017 May 20]. Available from: http://www.darpa.mil/program/darpa-robotics-challenge
[53] Punjani AP, Abbeel P. Deep learning helicopter dynamics models. Robotics and Automation (ICRA), 2015 IEEE International Conference on; 2015. p. 3223- 3230.
[54] Neverova N, Wolf C, Taylor GW, et al. Multi-scale deep learning for gesture detection and localization. Computer Vision-ECCV 2014 Workshops; 2014. p. 474-490.
[55] Mariolis I, Peleka G, Kargakos A, et al. Pose and category recognition of highly deformable objects using deep learning. Advanced Robotics (ICAR), 2015 International Conference on; Istanbul. 2015. p. 655-662.
[56] Yang Y, Li Y, Fermüller C, et al. Robot learning manipulation action plans by watching unconstrained videos from the world wide web. 29th AAAI Conference on Artificial Intelligence (AAAI-15); Austin, TX. 2015.
[57] Levine S, Pastor P, Krizhevsky A, et al. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. 2016. Available: http://arxiv.org/abs/1603.02199
[58] Ouyang W, Wang X. Joint deep learning for pedestrian detection. Computer Vision, 2013 IEEE Int. Conf. on; Sydney. 2013. p. 2056-2063.
[59] Wu J, Yildirim I, Lim JJ, et al. Galileo: Perceiving physical object properties by integrating a physics engine with deep learning. Advances in Neural Information Processing Systems 28; 2015.
[60] Schmitz A, Bansho Y, Noda K, et al. Tactile object recognition using deep learning and dropout. Humanoid Robots, 2014 14th IEEE-RAS Int. Conf. on; 2014. p. 1044-1050.
[61] Polydoros AS, Nalpantidis L, Kruger V. Real-time deep learning of robotic manipulator inverse dynamics. Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on; 2015. p. 3442-3448.
[62] Jain A, Koppula HS, Soh S, et al. Brain4Cars: car that knows before you do via sensory-fusion deep learning architecture. 2016. Available: http://arxiv.org/abs/1601.00740
[63] Lenz I, Knepper R, Saxena A. Deepmpc: learning deep latent features for model predictive control. Robotics: Science and Systems XI; Rome, Italy. 2015.
[64] Zhang T, Kahn G, Levine S, et al. Learning deep control policies for autonomous aerial vehicles with MPC-guided policy search. 2015. Available: http://arxiv.org/abs/1509.06791
[65] Pinto L, Gupta A. Supersizing self-supervision: learning to grasp from 50k tries and 700 robot hours. 2015. Available: http://arxiv.org/abs/1509.06825
[66] Kappler D, Bohg J, Schaal S. Leveraging big data for grasp planning. 2015 IEEE International Conference on Robotics and Automation (ICRA); Seattle, WA. 2015. p. 4304-4311.
[67] Pratt GA. Is a cambrian explosion coming for robotics? Journal of Economic Perspectives. 2015;29(3):51-60.
[68] Szegedy C, Zaremba W, Sutskever I, et al. Intriguing properties of neural networks. 2013. Available: http://arxiv.org/abs/1312.6199
Citation
Shimi P S, Shajan P X, "A Review on Deep Learning in Robotics", International Journal of Computer Sciences and Engineering, Vol.06, Issue.07, pp.19-25, 2018.
A Review On Various Steps In Apple Fruit Grading
Review Paper | Journal Paper
Vol.06, Issue.07, pp.26-30, Sep-2018
Abstract
Image processing transforms images into digital form, carries out some process on them, and yields an improved image or useful information extracted from it. Image processing has many applications, especially in agriculture, the food industry, quality control and product classification. Quality control in apple-based industries and marketing plays an important role in producing high-quality products. Traditionally, apple quality inspection is performed by human experts, but their accuracy is low. To address this, there are different apple grading techniques, each following the same basic steps. Apples are graded into three or more quality grades, such as AAA, AA and A, or A, B and C. This review compares the different methods used in each step of apple grading and identifies the best ones.
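The final grading step, mapping a detected defect area to a quality grade, might look like the following sketch (the defect-ratio thresholds are invented for illustration; real systems calibrate them against the grading standard in use, and the defect mask would come from an upstream segmentation stage):

```python
import numpy as np

def grade_apple(defect_mask, thresholds=(0.01, 0.05)):
    """Assign a quality grade from the fraction of defective surface pixels.

    defect_mask: binary array, 1 where a surface defect was detected.
    thresholds:  illustrative defect-area cut-offs separating AAA/AA/A.
    """
    ratio = defect_mask.mean()  # fraction of pixels marked defective
    if ratio <= thresholds[0]:
        return "AAA"
    if ratio <= thresholds[1]:
        return "AA"
    return "A"

mask = np.zeros((100, 100), dtype=np.uint8)
mask[:2, :10] = 1          # 20 defective pixels of 10,000 (0.2% of surface)
print(grade_apple(mask))   # AAA
```

This also shows why the earlier steps (background subtraction, defect segmentation, stem/calyx rejection) matter: a stem or calyx wrongly counted as a defect inflates the ratio and downgrades a sound apple.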
Key-Words / Index Term
Image processing, Apple grading, Digital images
References
[1] Dr. Vilas D. Sadegaonkar, Kiran H. Wagh. Improving Quality of Apple Using Computer Vision & Image Processing Based Grading System, International Journal of Science and Research (IJSR), 2013.
[2] Daniela Eisenstecken, Alessia Panarese, Peter Robatscher, Christian W. Huck, Angelo Zanella and Michael Oberhuber, A Near Infrared Spectroscopy (NIRS) and Chemometric Approach to Improve Apple Fruit Quality Management, Molecules 2015, 20, 13603-13619.
[3] Xiaohong Wu, Bin Wu, Jun Sun, Min Li and Hui Du, Discrimination of Apples Using Near Infrared Spectroscopy and Sorting Discriminant Analysis, International Journal of Food Properties, 19:1016–1028, 2016.
[4] Wenyi Tan, Laijun Sun, Fei Yang, Wenkai Che, Dandan Ye, Dan Zhang, Borui Zou, The feasibility of early detection and grading of apple bruises using hyperspectral imaging, Journal of Chemometrics, 2018;e3067.
[5] Zou XB, Zhao JW, Li Y, Mel H. In-line detection of apple defects using three color cameras system, Comp Electron Agric 2010;70(1):129-134.
[6] Czesław Puchalski, Józef Gorzelany, Grzegorz Zaguła, Gerald Brusewitz, Image analysis for apple defect detection, Biosystems and Agricultural Engineering.
[7] A. Davenel, CH. Guizard, T. Labarre, F. Sevila, Automatic Detection of Surface Defects by Using a Vision System on Fruit, CEMAGREF Division Technologie des Equipements Agricoles et Alimentaires.
[8] Unay D, Gosselin B, Artificial neural network-based segmentation and apple grading by machine vision, in: Proc. IEEE International Conference on Image Processing (ICIP 2005), pp. II-630-3.
[9] Mohana S.H., Prabhakar C.J., Stem-Calyx recognition of an apple using shape descriptors, Signal & Image Processing: An International Journal (SIPIJ), Vol. 5, No. 6, December 2014.
[10] David W. Penman, Determination of stem and calyx location on apples using automatic visual inspection, International Journal of Pharmacy and Technology, 0975-766X (2016).
[11] V. Leemans, M.-F. Destain, A real-time grading method of apples based on features extracted from defects, Journal of Food Engineering 61 (2004) 83-89.
[12] I. Paulus, R. De Busscher, E. Schrevens, Use of Image Analysis to Investigate Human Quality Classification of Apples, J. agric. Engng Res. (1997).
[13] I. Kavdır, D. E. Guyer, Comparison of Artificial Neural Networks and Statistical Classifiers in Apple Sorting using Textural Features, Biosystems Engineering (2004) 89 (3).
Citation
Anu V Kottath , "A Review On Various Steps In Apple Fruit Grading", International Journal of Computer Sciences and Engineering, Vol.06, Issue.07, pp.26-30, 2018.
Heart Disease Detection Using Data Mining Techniques
Review Paper | Journal Paper
Vol.06, Issue.07, pp.31-33, Sep-2018
Abstract
Data mining is the process of analysing large data sets and extracting meaning from the data. It helps predict patterns and future trends, supporting business decision making. Data mining provides methods and techniques for transforming data into useful information for decision making. These techniques can speed up the process and take less time to predict heart disease with greater accuracy. In this paper we survey different papers in which one or more data mining algorithms are used for the prediction of heart disease. By applying data mining techniques to heart disease data, we can obtain effective results and achieve reliable performance, which supports decision making in the healthcare industry. It will help cardiologists diagnose the disease in less time and predict probable complications well in advance.
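As a concrete example of the kind of technique surveyed, a KNN classifier for heart-disease risk can be set up with scikit-learn in a few lines (the data here is synthetic and the label rule is invented for illustration; a real study would use a clinical dataset such as the UCI heart-disease data):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)

# Synthetic stand-in for a patient table with three numeric attributes.
n = 300
X = np.column_stack([
    rng.uniform(30, 80, n),    # age (years)
    rng.uniform(150, 350, n),  # serum cholesterol (mg/dl)
    rng.uniform(90, 200, n),   # maximum heart rate achieved
])
# Toy label rule: older patients with high cholesterol are "at risk".
y = ((X[:, 0] > 55) & (X[:, 1] > 240)).astype(int)

# Hold out a test split to estimate how well the classifier generalizes.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

The same train/test scaffold applies to the other surveyed algorithms (decision trees, SVM, naive Bayes) by swapping the classifier, which is how the compared papers report their accuracy figures.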
Key-Words / Index Term
Data mining, disease prediction, KNN, Decision tree, SVM
References
[1] Ms. Chaitrali S. Dangare, Dr. Mrs. Sulabha S. Apte, "A Data Mining Approach for Prediction of Heart Disease Using Neural Networks", International Journal of Computer Engineering and Technology, 2012.
[2] M.A. Nishara Banu, B. Gomathy, "Disease Forecasting System Using Data Mining Methods", 2014.
[3] Aqueel Ahmed, Shaikh Abdul Hannan, "Data Mining Techniques to Find Out Heart Diseases", International Journal of Innovative Technology and Exploring Engineering (IJITEE), ISSN: 2278-3075, Volume-1, Issue-4, September 2012.
[4] Ms. Ishtake S.H., Prof. Sanap S.A., "Intelligent Heart Disease Prediction System Using Data Mining Techniques", International J. of Healthcare & Biomedical Research, 2013.
[5] Chitra R., Seenivasagam V., "Review of Heart Disease Prediction System Using Data Mining and Hybrid Intelligent Techniques", ICTACT Journal on Soft Computing, ISSN: 2229-6956 (Online), Volume 03, Issue 04, July 2013.
[6] Nidhi Bhatla, Kiran Jyoti, "An Analysis of Heart Disease Prediction using Different Data Mining Techniques", International Journal of Engineering Research & Technology (IJERT), ISSN: 2278-0181, Vol.1, Issue 8, October 2012.
[7] Shadab Adam Pattekari, Asma Parveen, "Prediction System for Heart Disease Using Naïve Bayes", International Journal of Advanced Computer and Mathematical Sciences, 2012.
[8] Venkatadri M., Dr. Lokanatha C. Reddy, "A Review on Data Mining from Past to the Future", International Journal of Computer Applications, 2011.
[9] Abhishek Taneja, "Heart Disease Prediction System Using Data Mining Techniques", Oriental Scientific Publishing Co., India, 2013.
[10] Rashedur M. Rahman, Farhana Afroz, "Comparison of Various Classification Techniques Using Different Data Mining Tools for Diabetes Diagnosis", Journal of Software Engineering and Applications, 2013.
[11] Nidhi Bhatla, Kiran Jyoti, "An Analysis of Heart Disease Prediction using Different Data Mining Techniques", International Journal of Engineering Research & Technology (IJERT), 2012.
[12] Humar Kahramanli, Novruz Allahverdi, "Design of a Hybrid System for the Diabetes and Heart Diseases", Elsevier, 2008.
[13] Marcel A.J. van Gerven, "Predicting Carcinoid Heart Disease with the Noisy-Threshold Classifier", Elsevier, 2007.
[14] Mohammad Taha Khan, Dr. Shamimul Qamar, Laurent F. Massin, "A Prototype of Cancer/Heart Disease Prediction Model Using Data Mining", International Journal of Applied Engineering Research, 2012.
[15] M. Akhil Jabbar, Dr. Priti Chandra, Dr. B.L. Deekshatulu, "Heart Disease Prediction System Using Associative Classification and Genetic Algorithm", International Conference on Emerging Trends in Electrical, Electronics and Communication Technologies, 2012.
[16] National High Blood Pressure Education Program Working Group on High Blood Pressure in Children and Adolescents, "The Fourth Report on the Diagnosis, Evaluation, and Treatment of High Blood Pressure in Children and Adolescents", Pediatrics, 114:555-576, 2004.
[17] Dinarević S., Mesihović H., Simeunović S., Zulić I., "Dyslipoproteinaemia in Children with Heart Disease", Intercontinental Cardiol, 3:126-129, 1994.
Citation
Akhila Anikumar, Shajan P X, "Heart Disease Detection Using Data Mining Techniques", International Journal of Computer Sciences and Engineering, Vol.06, Issue.07, pp.31-33, 2018.
Human Swarming with Artificial Swarm Intelligence using a hybrid approach
Review Paper | Journal Paper
Vol.06 , Issue.07 , pp.34-37, Sep-2018
Abstract
Swarm Intelligence explores swarms of autonomous robots or simulated agents. Little work, however, has been done on swarms of networked humans. Artificial Swarm Intelligence (ASI) strives to facilitate the emergence of a super-human intellect by connecting groups of human users in closed-loop systems modeled after biological swarms. Early studies have shown that “human swarms” can make more accurate predictions than traditional methods for tapping the wisdom of groups, such as votes and polls. Artificial Swarm Intelligence enables groups to form real-time systems online, connecting as ‘human swarms’ from anywhere in the world. By combining real-time human input with A.I. algorithms, an Artificial Swarm Intelligence based system merges the knowledge, wisdom, opinions, and intuitions of live human participants into a unified emergent intelligence that can generate optimized predictions, decisions, insights, and judgments. Simply put, Swarm A.I. technology creates amplified intelligence while keeping humans in the loop.
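The closed-loop dynamic the abstract describes, where each member balances its own experience against the group's emerging consensus, is the same feedback principle behind particle swarm optimization, the canonical swarm-intelligence algorithm the paper's references trace to Kennedy and Eberhart [20]. As an illustrative sketch (the function name and parameter values below are conventional textbook choices, not taken from the reviewed paper):

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization: each particle is pulled
    toward its own best-known position (individual experience) and the
    swarm's best-known position (social information)."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Minimise the sphere function; the swarm converges near the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=2, iters=200)
print(best_val)
```

In a human swarm the "particles" are live participants and the objective is a collective judgment rather than a mathematical function, but the pull between individual conviction and group consensus plays the same role.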
Key-Words / Index Term
Swarm intelligence, Human Swarming, ASI Algorithms
References
[1] Beni, G., Wang, J. Swarm Intelligence in Cellular Robotic Systems, Proceed. NATO Advanced Workshop on Robots and Biological Systems, Tuscany, Italy (1989).
[2] Rosenberg, L.B, “Human Swarms, a real-time paradigm for collective intelligence.” Collective Intelligence 2015, Santa Clara CA.
[3] Rosenberg, L.B., “Human Swarms, a real-time method for collective intelligence.” Proceedings of the European Conference on Artificial Life 2015, pp. 658-659
[4] Seeley, Thomas D., Visscher, P. Kirk. "Choosing a home: How the scouts in a honey bee swarm perceive the completion of their group decision making." Behavioral Ecology and Sociobiology 54 (5): 511-520, 2003.
[5] Seeley, Thomas D. Honeybee Democracy. Princeton University Press, 2010.
[6] Seeley, Thomas D., et al. "Stop signals provide cross inhibition in collective decision-making by honeybee swarms." Science 335.6064 (2012): 108-111.
[7] Axelrod R, Hamilton WD (1981) The evolution of cooperation. Science 211:1390–1396.
[8] Greene, Joshua (2013). Moral Tribes: Emotion, Reason, and the Gap between Us and Them. Penguin Press.
[9] Rosenberg, L.B, et al. “Swarm Intelligence and Morality of the Hive Mind” Collective Intelligence 2016, Santa Clara CA.
[10] Zhu, F., Yen, et al. “Overview of Swarm Intelligence” ICCASM, 22-24 Oct. 2010.
[11] Karasi, A., et al. “Finding Safe Path and Locations in Disaster Affected Area Using Swarm Intelligence” International Conference on Emerging Trends in Communication Technologies (ETCT), 2016.
[12] Lev Muchnik, Sinan Aral, Sean J. Taylor. Social Influence Bias: A Randomized Experiment. Science, 9 August 2013: Vol. 341 no. 6146 pp. 647-651.
[13] Rand, D. G., Arbesman, S. & Christakis, N. A. (2011) Dynamic social networks promote cooperation in experiments with humans. Proc. Natl Acad. Sci. USA 108, 19193–19198.
[14] Pinheiro, F. L., Santos, F. C., and Pacheco, J. M. (2012). How selection pressure changes the nature of social dilemmas in structured populations. New J. Phys., 14(7):073035.
[15] Santos, F. C., Pinheiro, F. L., Lenaerts, T., and Pacheco, J. M. (2012). The role of diversity in the evolution of cooperation. J. Theor. Biol., 299:88–96.
[16] Eberhart, Russell, Daniel Palmer, and Marc Kirschenbaum "Beyond computational intelligence: blended intelligence." Swarm/Human Blended Intelligence Workshop (SHBI), 2015. IEEE, 2015.
[17] K.M. Passino, T.D. Seeley, P.K. Visscher, Swarm Cognition in honeybees, Behav. Ecol. Sociobiol. 62, 401 (2008).
[18] J.A.R. Marshall, R. Bogacz, A. Dornhaus, R. Planque, T. Kovacs, N.R. Franks, On optimal decision making in brains and social insect colonies, J. R. Soc. Interface 6, 1065 (2009).
[19] I.D. Couzin, Collective Cognition in Animal Groups, Trends Cogn. Sci. 13,36 (2008).
[20] J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on NeuralNetworks, pp. 1942–1948, December 1995.
[21] Y. Shi and R. Eberhart, “Modified particle swarm optimizer,” in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC ’98), pp. 69–73, May 1998.
[22] Y. Shi and R. C. Eberhart, “Fuzzy adaptive particle swarm optimization,” in Proceedings of the Congress on Evolutionary Computation, pp. 101–106, May 2001.
[23] Varinder Singh et al., "An Effective Decision Making Approach: Human Swarming with Artificial Swarm Intelligence", International Journal of Advanced Research in Computer Science, Volume 8, No. 4, May 2017 (Special Issue).
Citation
Adhina Raju, Shajan P X, "Human Swarming with Artificial Swarm Intelligence using a hybrid approach", International Journal of Computer Sciences and Engineering, Vol.06, Issue.07, pp.34-37, 2018.