Long short-term memory (LSTM) units (or blocks) are a building unit for layers of a recurrent neural network (RNN). An RNN composed of LSTM units is often called an LSTM network. A common LSTM unit is composed of a cell, an input gate, an output gate and a forget gate. The cell is responsible for "remembering" values over arbitrary time intervals; hence the word "memory" in LSTM. Each of the three gates can be thought of as a "conventional" artificial neuron, as in a multi-layer (or feedforward) neural network: that is, each computes an activation (using an activation function) of a weighted sum. Intuitively, the gates can be thought of as regulators of the flow of values through the connections of the LSTM; hence the designation "gate". There are connections between these gates and the cell.
The expression long short-term refers to the fact that LSTM is a model of short-term memory which can persist for a long period of time. An LSTM is well-suited to classify, process and predict time series with time lags of unknown size and duration between important events. LSTMs were developed to deal with the exploding and vanishing gradient problems encountered when training traditional RNNs. Relative insensitivity to gap length gives LSTM an advantage over alternative RNNs, hidden Markov models and other sequence learning methods in numerous applications[citation needed].
History
LSTM was proposed in 1997 by Sepp Hochreiter and Jürgen Schmidhuber[1] and improved in 2000 by Felix Gers' team.[2]
Among other successes, LSTM achieved record results in natural language text compression,[3] unsegmented connected handwriting recognition[4] and won the ICDAR handwriting competition (2009). LSTM networks were a major component of a network that achieved a record 17.7% phoneme error rate on the classic TIMIT natural speech dataset (2013).[5]
As of 2016, major technology companies including Google, Apple, and Microsoft were using LSTM as a fundamental component in new products.[6] For example, Google used LSTM for speech recognition on smartphones,[7][8] for the smart assistant Allo[9] and for Google Translate.[10][11] Apple uses LSTM for the "QuickType" function on the iPhone[12][13] and for Siri.[14] Amazon uses LSTM for Amazon Alexa.[15]
In 2017 Microsoft reported reaching 95.1% recognition accuracy on the Switchboard corpus, incorporating a vocabulary of 165,000 words. The approach used "dialog session-based long-short-term memory".[16]
Architectures
There are several architectures of LSTM units. A common architecture is composed of a memory cell, an input gate, an output gate and a forget gate.
An LSTM (memory) cell stores a value (or state), for either long or short time periods. This is achieved by using an identity (or no) activation function for the memory cell. In this way, when an LSTM network (that is an RNN composed of LSTM units) is trained with backpropagation through time, the gradient does not tend to vanish.
The LSTM gates compute an activation, often using the logistic function. Intuitively, the input gate controls the extent to which a new value flows into the cell, the forget gate controls the extent to which a value remains in the cell and the output gate controls the extent to which the value in the cell is used to compute the output activation of the LSTM unit.
There are connections into and out of these gates, a few of which are recurrent. The weights of these connections, which are learned during training, determine how the gates operate. Each gate has its own parameters, that is, weights and biases, possibly connecting it to other units outside the LSTM unit.
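To make the gating idea concrete, here is a minimal sketch (NumPy assumed, names hypothetical) of a single gate computed as the logistic function of a weighted sum of the current input and the previous unit output; the full cell update appears in the Variants section below.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gate_activation(W, U, b, x_t, h_prev):
    """One LSTM gate: a logistic squashing of a weighted sum of the
    current input x_t and the previous unit output h_prev."""
    return sigmoid(W @ x_t + U @ h_prev + b)  # each entry lies in (0, 1)
```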
Variants
In the equations below, each variable in lowercase italics represents a vector. Matrices $W_q$ and $U_q$ contain, respectively, the weights of the input and recurrent connections, where the subscript $q$ can either be the input gate $i$, the output gate $o$, the forget gate $f$ or the memory cell $c$, depending on the activation being calculated.
LSTM with a forget gate
The compact forms of the equations for the forward pass of an LSTM unit with a forget gate are:[1][2]

$$f_t = \sigma_g(W_f x_t + U_f h_{t-1} + b_f)$$
$$i_t = \sigma_g(W_i x_t + U_i h_{t-1} + b_i)$$
$$o_t = \sigma_g(W_o x_t + U_o h_{t-1} + b_o)$$
$$c_t = f_t \circ c_{t-1} + i_t \circ \sigma_c(W_c x_t + U_c h_{t-1} + b_c)$$
$$h_t = o_t \circ \sigma_h(c_t)$$

where the initial values are $c_0 = 0$ and $h_0 = 0$, and the operator $\circ$ denotes the Hadamard product (element-wise product). The subscript $t$ indexes the time step.
Variables
- $x_t \in \mathbb{R}^{d}$: input vector to the LSTM unit
- $f_t \in (0,1)^{h}$: forget gate's activation vector
- $i_t \in (0,1)^{h}$: input gate's activation vector
- $o_t \in (0,1)^{h}$: output gate's activation vector
- $h_t \in (-1,1)^{h}$: output vector of the LSTM unit
- $c_t \in \mathbb{R}^{h}$: cell state vector
- $W \in \mathbb{R}^{h \times d}$, $U \in \mathbb{R}^{h \times h}$ and $b \in \mathbb{R}^{h}$: weight matrices and bias vector parameters which need to be learned during training,
where the superscripts $d$ and $h$ refer to the number of input features and number of hidden units, respectively.
- $\sigma_g$: sigmoid function.
- $\sigma_c$: hyperbolic tangent function.
- $\sigma_h$: hyperbolic tangent function or, as the peephole LSTM paper[which?] suggests, the identity function $\sigma_h(x) = x$.[17][18]
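These equations translate directly into code. Below is a minimal NumPy sketch of a single forward step, assuming the weight matrices $W$, $U$ and biases $b$ are supplied as dictionaries keyed by gate; the names and data layout are illustrative, not taken from any particular library.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One time step of an LSTM unit with a forget gate.

    W, U, b are dicts with keys 'f', 'i', 'o', 'c' holding the
    (h x d) input weights, (h x h) recurrent weights and (h,) biases.
    """
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])      # forget gate
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])      # input gate
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])      # output gate
    c_tilde = np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])  # candidate cell value
    c_t = f_t * c_prev + i_t * c_tilde                          # Hadamard products
    h_t = o_t * np.tanh(c_t)                                    # unit output
    return h_t, c_t

# Running a whole sequence starts from the initial values h_0 = c_0 = 0:
# h, c = np.zeros(hidden), np.zeros(hidden)
# for x_t in sequence:
#     h, c = lstm_step(x_t, h, c, W, U, b)
```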
Peephole LSTM
A peephole LSTM is an LSTM unit with peephole connections.[17][18] Peephole connections allow the gates to access the constant error carousel (CEC), whose activation is the cell state.[20] In this variant, $h_{t-1}$ is not used; $c_{t-1}$ is used instead in most places.
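As a rough sketch of how the update changes (peephole formulations vary between papers; this follows the variant described above, in which the gates read the previous cell state and $\sigma_h$ is the identity), only the pre-activations differ from the standard unit:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def peephole_lstm_step(x_t, c_prev, W, U, b):
    """One step of a peephole LSTM in which the gates see the previous
    cell state c_prev instead of the previous output h_prev."""
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ c_prev + b['f'])
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ c_prev + b['i'])
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ c_prev + b['o'])
    c_t = f_t * c_prev + i_t * np.tanh(W['c'] @ x_t + b['c'])  # no recurrent h term here
    h_t = o_t * c_t          # sigma_h taken as the identity in this variant
    return h_t, c_t
```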
Convolutional LSTM
In a convolutional LSTM,[21] the matrix multiplications in the gate and cell equations are replaced by convolutions; the operator $*$ denotes the convolution operator.
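The sketch below is a single-channel illustration assuming SciPy's convolve2d; practical convolutional LSTMs operate on multi-channel feature maps and may include peephole terms, so this is a simplification rather than the exact formulation of the cited paper.

```python
import numpy as np
from scipy.signal import convolve2d

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv_lstm_step(x_t, h_prev, c_prev, K, R, b):
    """One step of a single-channel convolutional LSTM.

    x_t, h_prev, c_prev are 2-D arrays (H x W); K and R are dicts of
    small convolution kernels for the input and recurrent paths.
    """
    def conv(a, k):
        return convolve2d(a, k, mode='same')   # the '*' operator in the equations

    f_t = sigmoid(conv(x_t, K['f']) + conv(h_prev, R['f']) + b['f'])
    i_t = sigmoid(conv(x_t, K['i']) + conv(h_prev, R['i']) + b['i'])
    o_t = sigmoid(conv(x_t, K['o']) + conv(h_prev, R['o']) + b['o'])
    c_t = f_t * c_prev + i_t * np.tanh(conv(x_t, K['c']) + conv(h_prev, R['c']) + b['c'])
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t
```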
Training
To minimize LSTM's total error on a set of training sequences, iterative gradient descent such as backpropagation through time can be used to change each weight in proportion to its derivative with respect to the error. A problem with using gradient descent for standard RNNs is that error gradients vanish exponentially quickly with the size of the time lag between important events. This is because $\lim_{n \to \infty} W^n = 0$ if the spectral radius of the recurrent weight matrix $W$ is smaller than 1.[22][23] With LSTM units, however, when error values are back-propagated from the output, the error remains in the unit's memory cell. This "error carousel" continuously feeds error back to each of the gates until they learn to cut off the value. Thus, regular backpropagation is effective at training an LSTM unit to remember values for long durations.
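The following small NumPy experiment (illustrative values only) shows the decay that this limit describes: back-propagated error in a standard RNN is repeatedly multiplied by the recurrent Jacobian, so with spectral radius 0.9 its norm shrinks roughly like $0.9^n$ over $n$ steps, which is exactly the effect the LSTM cell's additive state update avoids.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 16))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # rescale so the spectral radius is 0.9

error = rng.normal(size=16)
for lag in range(1, 101):
    error = W.T @ error                          # one step of back-propagation through time
    if lag % 25 == 0:
        print(lag, np.linalg.norm(error))        # norm shrinks roughly like 0.9 ** lag
```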
LSTM can also be trained by a combination of artificial evolution for weights to the hidden units, and pseudo-inverse or support vector machines for weights to the output units.[24] In reinforcement learning applications LSTM can be trained by policy gradient methods, evolution strategies or genetic algorithms[citation needed].
CTC score function
Many applications use stacks of LSTM RNNs[25] and train them by connectionist temporal classification (CTC)[26] to find an RNN weight matrix that maximizes the probability of the label sequences in a training set, given the corresponding input sequences. CTC achieves both alignment and recognition.
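As an illustration only, the PyTorch sketch below (layer sizes, feature dimensions and variable names are assumptions, not taken from the cited papers) trains a stacked LSTM with the CTC loss, which marginalizes over all alignments between the per-frame outputs and the label sequence:

```python
import torch
import torch.nn as nn

T, N, C, F = 50, 4, 20, 13                 # frames, batch, labels incl. blank, features

lstm = nn.LSTM(F, 128, num_layers=2)       # a small stack of LSTM layers
proj = nn.Linear(128, C)                   # per-frame label scores
ctc = nn.CTCLoss(blank=0)                  # index 0 is the CTC blank symbol

x = torch.randn(T, N, F)                   # dummy input sequences
targets = torch.randint(1, C, (N, 10))     # dummy label sequences (no blanks)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 10, dtype=torch.long)

hidden, _ = lstm(x)                                  # (T, N, 128)
log_probs = proj(hidden).log_softmax(dim=-1)         # CTC expects log-probabilities
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                                      # gradients w.r.t. all LSTM weights
```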
Applications
Applications of LSTM include:
- Robot control[27]
- Time series prediction[28]
- Speech recognition[29][30][31]
- Rhythm learning[18]
- Music composition[32]
- Grammar learning[33][17][34]
- Handwriting recognition[35][36]
- Human action recognition[37]
- Sign language translation[38]
- Protein homology detection[39]
- Predicting subcellular localization of proteins[40]
- Time series anomaly detection[41]
- Several prediction tasks in the area of business process management[42]
- Prediction in medical care pathways[43]
- Semantic parsing[44]
LSTM is Turing complete in the sense that, given enough network units and the proper weight matrix (which may be viewed as its program), it can compute any result that a conventional computer can compute[citation needed][further explanation needed].
References
- ^ a b Sepp Hochreiter; Jürgen Schmidhuber (1997). "Long short-term memory". Neural Computation. 9 (8): 1735–1780. doi:10.1162/neco.1997.9.8.1735. PMID 9377276. S2CID 1915014.
- ^ a b Felix A. Gers; Jürgen Schmidhuber; Fred Cummins (2000). "Learning to Forget: Continual Prediction with LSTM". Neural Computation. 12 (10): 2451–2471. CiteSeerX 10.1.1.55.5709. doi:10.1162/089976600300015015. PMID 11032042. S2CID 11598600.
- ^ "The Large Text Compression Benchmark". Retrieved 2017-01-13.
- ^ Graves, A.; Liwicki, M.; Fernández, S.; Bertolami, R.; Bunke, H.; Schmidhuber, J. (May 2009). "A Novel Connectionist System for Unconstrained Handwriting Recognition". IEEE Transactions on Pattern Analysis and Machine Intelligence. 31 (5): 855–868. doi:10.1109/tpami.2008.137. ISSN 0162-8828. PMID 19299860. S2CID 14635907.
- ^ Graves, Alex; Mohamed, Abdel-rahman; Hinton, Geoffrey (2013-03-22). "Speech Recognition with Deep Recurrent Neural Networks". arXiv:1303.5778 [cs.NE].
- ^ "With QuickType, Apple wants to do more than guess your next text. It wants to give you an AI". WIRED. Retrieved 2016-06-16.
- ^ Beaufays, Françoise (August 11, 2015). "The neural networks behind Google Voice transcription". Research Blog. Retrieved 2017-06-27.
- ^ Sak, Haşim; Senior, Andrew; Rao, Kanishka; Beaufays, Françoise; Schalkwyk, Johan (September 24, 2015). "Google voice search: faster and more accurate". Research Blog. Retrieved 2017-06-27.
- ^ Khaitan, Pranav (May 18, 2016). "Chat Smarter with Allo". Research Blog. Retrieved 2017-06-27.
- ^ Wu, Yonghui; Schuster, Mike; Chen, Zhifeng; Le, Quoc V.; Norouzi, Mohammad; Macherey, Wolfgang; Krikun, Maxim; Cao, Yuan; Gao, Qin (2016-09-26). "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation". arXiv:1609.08144 [cs.CL].
- ^ Metz, Cade (September 27, 2016). "An Infusion of AI Makes Google Translate More Powerful Than Ever | WIRED". www.wired.com. Retrieved 2017-06-27.
- ^ Efrati, Amir (June 13, 2016). "Apple's Machines Can Learn Too". The Information. Retrieved 2017-06-27.
- ^ Ranger, Steve (June 14, 2016). "iPhone, AI and big data: Here's how Apple plans to protect your privacy | ZDNet". ZDNet. Retrieved 2017-06-27.
- ^ Smith, Chris (2016-06-13). "iOS 10: Siri now works in third-party apps, comes with extra AI features". BGR. Retrieved 2017-06-27.
- ^ Vogels, Werner (30 November 2016). "Bringing the Magic of Amazon AI and Alexa to Apps on AWS. - All Things Distributed". www.allthingsdistributed.com. Retrieved 2017-06-27.
- ^ Haridy, Rich (August 21, 2017). "Microsoft's speech recognition system is now as good as a human". newatlas.com. Retrieved 2017-08-27.
- ^ a b c Gers, F. A.; Schmidhuber, J. (2001). "LSTM Recurrent Networks Learn Simple Context Free and Context Sensitive Languages" (PDF). IEEE Transactions on Neural Networks. 12 (6): 1333–1340. doi:10.1109/72.963769. PMID 18249962.
- ^ a b c Gers, F.; Schraudolph, N.; Schmidhuber, J. (2002). "Learning precise timing with LSTM recurrent networks" (PDF). Journal of Machine Learning Research. 3: 115–143.
- ^ Klaus Greff; Rupesh Kumar Srivastava; Jan Koutník; Bas R. Steunebrink; Jürgen Schmidhuber (2015). "LSTM: A Search Space Odyssey". IEEE Transactions on Neural Networks and Learning Systems. 28 (10): 2222–2232. arXiv:1503.04069. doi:10.1109/TNNLS.2016.2582924. PMID 27411231. S2CID 3356463.
- ^ Gers, F. A.; Schmidhuber, E. (November 2001). "LSTM recurrent networks learn simple context-free and context-sensitive languages" (PDF). IEEE Transactions on Neural Networks. 12 (6): 1333–1340. doi:10.1109/72.963769. ISSN 1045-9227. PMID 18249962.
- ^ Xingjian Shi; Zhourong Chen; Hao Wang; Dit-Yan Yeung; Wai-kin Wong; Wang-chun Woo (2015). "Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting". Proceedings of the 28th International Conference on Neural Information Processing Systems: 802–810. arXiv:1506.04214.
- ^ S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Institut f. Informatik, Technische Univ. Munich, 1991.
- ^ Hochreiter, S.; Bengio, Y.; Frasconi, P.; Schmidhuber, J. (2001). "Gradient Flow in Recurrent Nets: the Difficulty of Learning Long-Term Dependencies". In Kremer, S. C.; Kolen, J. F. (eds.). A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press. Retrieved 2017-06-27.
- ^ Schmidhuber, J.; Wierstra, D.; Gagliolo, M.; Gomez, F. (2007). "Training Recurrent Networks by Evolino". Neural Computation. 19 (3): 757–779. doi:10.1162/neco.2007.19.3.757. PMID 17298232. S2CID 11745761.
- ^ Fernández, Santiago; Graves, Alex; Schmidhuber, Jürgen (2007). "Sequence labelling in structured domains with hierarchical recurrent neural networks". Proc. 20th Int. Joint Conf. on Artificial Intelligence, IJCAI 2007: 774–779. CiteSeerX 10.1.1.79.1887.
- ^ Graves, Alex; Fernández, Santiago; Gomez, Faustino (2006). "Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks". In Proceedings of the International Conference on Machine Learning, ICML 2006: 369–376. CiteSeerX 10.1.1.75.6306.
- ^ Mayer, H.; Gomez, F.; Wierstra, D.; Nagy, I.; Knoll, A.; Schmidhuber, J. (October 2006). "A System for Robotic Heart Surgery that Learns to Tie Knots Using Recurrent Neural Networks". 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems: 543–548. doi:10.1109/IROS.2006.282190. ISBN 1-4244-0258-1. S2CID 12284900.
- ^ Wierstra, Daan; Schmidhuber, J.; Gomez, F. J. (2005). "Evolino: Hybrid Neuroevolution/Optimal Linear Search for Sequence Learning". Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI), Edinburgh: 853–858.
- ^ Graves, A.; Schmidhuber, J. (2005). "Framewise phoneme classification with bidirectional LSTM and other neural network architectures". Neural Networks. 18 (5–6): 602–610. doi:10.1016/j.neunet.2005.06.042. PMID 16112549.
- ^ Fernández, Santiago; Graves, Alex; Schmidhuber, Jürgen (2007). "An Application of Recurrent Neural Networks to Discriminative Keyword Spotting". Proceedings of the 17th International Conference on Artificial Neural Networks. ICANN'07. Berlin, Heidelberg: Springer-Verlag: 220–229. ISBN 978-3540746935.
- ^ Graves, Alex; Mohamed, Abdel-rahman; Hinton, Geoffrey (2013). "Speech Recognition with Deep Recurrent Neural Networks". Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on: 6645–6649. arXiv:1303.5778. doi:10.1109/ICASSP.2013.6638947. ISBN 978-1-4799-0356-6. S2CID 206741496.
- ^ Eck, Douglas; Schmidhuber, Jürgen (2002-08-28). "Learning the Long-Term Structure of the Blues". Artificial Neural Networks — ICANN 2002. Lecture Notes in Computer Science. 2415. Springer, Berlin, Heidelberg: 284–289. doi:10.1007/3-540-46084-5_47. ISBN 3540460845.
- ^ Schmidhuber, J.; Gers, F.; Eck, D.; Schmidhuber, J.; Gers, F. (2002). "Learning nonregular languages: A comparison of simple recurrent networks and LSTM". Neural Computation. 14 (9): 2039–2041. doi:10.1162/089976602320263980. PMID 12184841. S2CID 30459046.
- ^ Perez-Ortiz, J. A.; Gers, F. A.; Eck, D.; Schmidhuber, J. (2003). "Kalman filters improve LSTM network performance in problems unsolvable by traditional recurrent nets". Neural Networks. 16 (2): 241–250. doi:10.1016/s0893-6080(02)00219-8. PMID 12628609.
- ^ A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. Advances in Neural Information Processing Systems 22, NIPS'22, pp 545–552, Vancouver, MIT Press, 2009.
- ^ Graves, Alex; Fernández, Santiago; Liwicki, Marcus; Bunke, Horst; Schmidhuber, Jürgen (2007). "Unconstrained Online Handwriting Recognition with Recurrent Neural Networks". Proceedings of the 20th International Conference on Neural Information Processing Systems. NIPS'07. USA: Curran Associates Inc.: 577–584. ISBN 9781605603520.
- ^ M. Baccouche, F. Mamalet, C Wolf, C. Garcia, A. Baskurt. Sequential Deep Learning for Human Action Recognition. 2nd International Workshop on Human Behavior Understanding (HBU), A.A. Salah, B. Lepri ed. Amsterdam, Netherlands. pp. 29–39. Lecture Notes in Computer Science 7065. Springer. 2011
- ^ Huang, Jie; Zhou, Wengang; Zhang, Qilin; Li, Houqiang; Li, Weiping (2018-01-30). "Video-based Sign Language Recognition without Temporal Segmentation". arXiv:1801.10111.
- ^ Hochreiter, S.; Heusel, M.; Obermayer, K. (2007). "Fast model-based protein homology detection without alignment". Bioinformatics. 23 (14): 1728–1736. doi:10.1093/bioinformatics/btm247. PMID 17488755.
- ^ Thireou, T.; Reczko, M. (2007). "Bidirectional Long Short-Term Memory Networks for predicting the subcellular localization of eukaryotic proteins". IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB). 4 (3): 441–446. doi:10.1109/tcbb.2007.1015. PMID 17666763. S2CID 11787259.
- ^ Malhotra, Pankaj; Vig, Lovekesh; Shroff, Gautam; Agarwal, Puneet (April 2015). "Long Short Term Memory Networks for Anomaly Detection in Time Series" (PDF). European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning — ESANN 2015.
- ^ Tax, N.; Verenich, I.; La Rosa, M.; Dumas, M. (2017). "Predictive Business Process Monitoring with LSTM neural networks". Proceedings of the International Conference on Advanced Information Systems Engineering (CAiSE). Lecture Notes in Computer Science. 10253: 477–492. arXiv:1612.02130. doi:10.1007/978-3-319-59536-8_30. ISBN 978-3-319-59535-1. S2CID 2192354.
- ^ Choi, E.; Bahadori, M.T.; Schuetz, E.; Stewart, W.; Sun, J. (2016). "Doctor AI: Predicting Clinical Events via Recurrent Neural Networks". Proceedings of the 1st Machine Learning for Healthcare Conference. 56: 301–318. PMC 5341604. PMID 28286600.
- ^ Jia, Robin; Liang, Percy (2016-06-11). "Data Recombination for Neural Semantic Parsing". arXiv:1606.03622 [cs].
External links
- Recurrent Neural Networks with over 30 LSTM papers by Jürgen Schmidhuber's group at IDSIA
- Gers PhD thesis on LSTM networks.
- Fraud detection paper with two chapters devoted to explaining recurrent neural networks, especially LSTM.
- Paper on a high-performing extension of LSTM that has been simplified to a single node type and can train arbitrary architectures.
- Tutorial: How to implement LSTM in Python with Theano
- A Beginner’s Guide to Recurrent Networks and LSTMs