
Long short-term memory (LSTM)[1] is a type of recurrent neural network (RNN) aimed at mitigating the vanishing gradient problem[2] commonly encountered by traditional RNNs. Its relative insensitivity to gap length is its advantage over other RNNs, hidden Markov models, and other sequence-learning methods. It aims to provide a short-term memory for RNNs that can last thousands of timesteps (thus "long short-term memory").[1] The name refers by analogy to long-term memory and short-term memory and their relationship, which cognitive psychologists have studied since the early 20th century.

The long short-term memory (LSTM) cell can process data sequentially and keep its hidden state through time.

An LSTM unit is typically composed of a cell and three gates: an input gate, an output gate,[3] and a forget gate.[4] The cell remembers values over arbitrary time intervals, and the gates regulate the flow of information into and out of the cell. Forget gates decide what information to discard from the previous state, by mapping the previous state and the current input to a value between 0 and 1. A (rounded) value of 1 signifies retention of the information, and a value of 0 represents discarding. Input gates decide which pieces of new information to store in the current cell state, using the same system as forget gates. Output gates control which pieces of information in the current cell state to output, by assigning a value from 0 to 1 to the information, considering the previous and current states. Selectively outputting relevant information from the current state allows the LSTM network to maintain useful, long-term dependencies to make predictions, both in current and future time-steps.

LSTM has wide applications in classification,[5][6] data processing, time series analysis,[7] speech recognition,[8][9] machine translation,[10][11] speech activity detection,[12] robot control,[13][14] video games,[15][16] and healthcare.[17]

Motivation


In theory, classic RNNs can keep track of arbitrarily long-term dependencies in the input sequences. The problem with classic RNNs is computational (or practical) in nature: when training a classic RNN using back-propagation, the long-term gradients which are back-propagated can "vanish", meaning they can tend to zero due to very small numbers creeping into the computations, causing the model to effectively stop learning. RNNs using LSTM units partially solve the vanishing gradient problem, because LSTM units allow gradients to also flow with little to no attenuation. However, LSTM networks can still suffer from the exploding gradient problem.[18]

The intuition behind the LSTM architecture is to create an additional module in a neural network that learns when to remember and when to forget pertinent information.[4] In other words, the network effectively learns which information might be needed later on in a sequence and when that information is no longer needed. For instance, in the context of natural language processing, the network can learn grammatical dependencies.[19] An LSTM might process the sentence "Dave, as a result of his controversial claims, is now a pariah" by remembering the (statistically likely) grammatical gender and number of the subject Dave, noting that this information is pertinent for the pronoun his, and noting that this information is no longer important after the verb is.

Variants


In the equations below, the lowercase variables represent vectors. Matrices $W_q$ and $U_q$ contain, respectively, the weights of the input and recurrent connections, where the subscript $q$ can either be the input gate $i$, the output gate $o$, the forget gate $f$ or the memory cell $c$, depending on the activation being calculated. In this section, we are thus using a "vector notation". So, for example, $c_t \in \mathbb{R}^{h}$ is not just one unit of one LSTM cell, but contains $h$ LSTM cell's units.

See [20] for an empirical study of eight architectural variants of LSTM.

LSTM with a forget gate


The compact forms of the equations for the forward pass of an LSTM cell with a forget gate are:[1][4]

$$
\begin{aligned}
f_t &= \sigma_g(W_f x_t + U_f h_{t-1} + b_f) \\
i_t &= \sigma_g(W_i x_t + U_i h_{t-1} + b_i) \\
o_t &= \sigma_g(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t &= \sigma_c(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \sigma_h(c_t)
\end{aligned}
$$

where the initial values are $c_0 = 0$ and $h_0 = 0$, and the operator $\odot$ denotes the Hadamard product (element-wise product). The subscript $t$ indexes the time step.

Variables


Letting the superscripts $d$ and $h$ refer to the number of input features and the number of hidden units, respectively:

  • $x_t \in \mathbb{R}^{d}$: input vector to the LSTM unit
  • $f_t \in (0,1)^{h}$: forget gate's activation vector
  • $i_t \in (0,1)^{h}$: input/update gate's activation vector
  • $o_t \in (0,1)^{h}$: output gate's activation vector
  • $h_t \in (-1,1)^{h}$: hidden state vector, also known as the output vector of the LSTM unit
  • $\tilde{c}_t \in (-1,1)^{h}$: cell input activation vector
  • $c_t \in \mathbb{R}^{h}$: cell state vector
  • $W \in \mathbb{R}^{h \times d}$, $U \in \mathbb{R}^{h \times h}$ and $b \in \mathbb{R}^{h}$: weight matrices and bias vector parameters which need to be learned during training
  • $\sigma_g$: sigmoid function.
  • $\sigma_c$: hyperbolic tangent function.
  • $\sigma_h$: hyperbolic tangent function or, as the peephole LSTM paper[21][22] suggests, $\sigma_h(x) = x$.
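
As a concrete illustration of the equations and variable definitions above, the following is a minimal NumPy sketch of one forward-pass step. The function name `lstm_step`, the dictionary layout of the weights, and the toy dimensions are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One time step of an LSTM cell with a forget gate.

    W: dict of (h, d) input weight matrices for gates 'f', 'i', 'o', 'c'
    U: dict of (h, h) recurrent weight matrices for the same gates
    b: dict of (h,) bias vectors
    """
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])      # forget gate
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])      # input gate
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])      # output gate
    c_tilde = np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])  # cell input activation
    c_t = f_t * c_prev + i_t * c_tilde   # new cell state (Hadamard products)
    h_t = o_t * np.tanh(c_t)             # new hidden state / output
    return h_t, c_t

# Example: run a random 3-feature sequence through a 5-unit cell,
# starting from c_0 = 0 and h_0 = 0 as in the equations above.
d, h = 3, 5
rng = np.random.default_rng(0)
W = {g: rng.normal(size=(h, d)) * 0.1 for g in 'fioc'}
U = {g: rng.normal(size=(h, h)) * 0.1 for g in 'fioc'}
b = {g: np.zeros(h) for g in 'fioc'}
h_t, c_t = np.zeros(h), np.zeros(h)
for x_t in rng.normal(size=(10, d)):
    h_t, c_t = lstm_step(x_t, h_t, c_t, W, U, b)
```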

Peephole LSTM

Figure: A peephole LSTM unit with input (i.e. $i$), output (i.e. $o$), and forget (i.e. $f$) gates

The figure on the right is a graphical representation of an LSTM unit with peephole connections (i.e. a peephole LSTM).[21][22] Peephole connections allow the gates to access the constant error carousel (CEC), whose activation is the cell state.[21] $h_{t-1}$ is not used; $c_{t-1}$ is used instead in most places.

$$
\begin{aligned}
f_t &= \sigma_g(W_f x_t + U_f c_{t-1} + b_f) \\
i_t &= \sigma_g(W_i x_t + U_i c_{t-1} + b_i) \\
o_t &= \sigma_g(W_o x_t + U_o c_{t-1} + b_o) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \sigma_c(W_c x_t + b_c) \\
h_t &= o_t \odot \sigma_h(c_t)
\end{aligned}
$$

Each of the gates can be thought of as a "standard" neuron in a feed-forward (or multi-layer) neural network: that is, they compute an activation (using an activation function) of a weighted sum. $i_t$, $o_t$ and $f_t$ represent the activations of, respectively, the input, output and forget gates at time step $t$.

The 3 exit arrows from the memory cell $c$ to the 3 gates $i$, $o$ and $f$ represent the peephole connections. These peephole connections actually denote the contributions of the activation of the memory cell $c$ at time step $t-1$, i.e. the contribution of $c_{t-1}$ (and not $c_t$, as the picture may suggest). In other words, the gates $i$, $o$ and $f$ calculate their activations at time step $t$ (i.e., respectively, $i_t$, $o_t$ and $f_t$) also considering the activation of the memory cell $c$ at time step $t-1$, i.e. $c_{t-1}$.

The single left-to-right arrow exiting the memory cell is not a peephole connection and denotes $c_t$.

The little circles containing a $\times$ symbol represent an element-wise multiplication between their inputs. The big circles containing an S-like curve represent the application of a differentiable function (like the sigmoid function) to a weighted sum.
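
Under the same illustrative conventions as the earlier NumPy sketch (hypothetical names and weight layout), the peephole variant changes where the gates read their recurrent input: the previous cell state $c_{t-1}$ rather than the previous hidden state, following the equations above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def peephole_lstm_step(x_t, c_prev, W, U, b):
    """One peephole LSTM step; U holds the peephole weights read from c_{t-1}."""
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ c_prev + b['f'])  # peephole into c_{t-1}
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ c_prev + b['i'])
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ c_prev + b['o'])
    c_t = f_t * c_prev + i_t * np.tanh(W['c'] @ x_t + b['c'])  # cell input sees only x_t
    h_t = o_t * np.tanh(c_t)                                   # new hidden state
    return h_t, c_t
```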

Peephole convolutional LSTM


Peephole convolutional LSTM.[23] The operator $*$ denotes the convolution operator.

$$
\begin{aligned}
f_t &= \sigma_g(W_f * x_t + U_f * h_{t-1} + V_f \odot c_{t-1} + b_f) \\
i_t &= \sigma_g(W_i * x_t + U_i * h_{t-1} + V_i \odot c_{t-1} + b_i) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \sigma_c(W_c * x_t + U_c * h_{t-1} + b_c) \\
o_t &= \sigma_g(W_o * x_t + U_o * h_{t-1} + V_o \odot c_t + b_o) \\
h_t &= o_t \odot \sigma_h(c_t)
\end{aligned}
$$
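
A rough PyTorch sketch of one such step is given below. The function name, channel counts, 3×3 kernel size, and weight layout are assumptions made for illustration; only the structure (convolutions in place of matrix products, element-wise peephole terms) follows the equations above.

```python
import torch
import torch.nn.functional as F

def conv_lstm_step(x_t, h_prev, c_prev, w_x, w_h, v, b):
    """x_t: (N, C_in, H, W); h_prev, c_prev: (N, C_hid, H, W).
    w_x / w_h: dicts of conv kernels for gates 'f', 'i', 'o', 'c';
    v: dicts of per-channel peephole weights; b: dicts of biases."""
    def conv(w, x):
        return F.conv2d(x, w, padding=w.shape[-1] // 2)  # "same" padding
    f_t = torch.sigmoid(conv(w_x['f'], x_t) + conv(w_h['f'], h_prev) + v['f'] * c_prev + b['f'])
    i_t = torch.sigmoid(conv(w_x['i'], x_t) + conv(w_h['i'], h_prev) + v['i'] * c_prev + b['i'])
    c_t = f_t * c_prev + i_t * torch.tanh(conv(w_x['c'], x_t) + conv(w_h['c'], h_prev) + b['c'])
    o_t = torch.sigmoid(conv(w_x['o'], x_t) + conv(w_h['o'], h_prev) + v['o'] * c_t + b['o'])
    h_t = o_t * torch.tanh(c_t)
    return h_t, c_t

# Toy usage: 4 input channels, 8 hidden channels, 3x3 kernels, 16x16 grid.
N, C_in, C_hid, H, W_ = 2, 4, 8, 16, 16
w_x = {g: torch.randn(C_hid, C_in, 3, 3) * 0.1 for g in 'fioc'}
w_h = {g: torch.randn(C_hid, C_hid, 3, 3) * 0.1 for g in 'fioc'}
v = {g: torch.randn(1, C_hid, 1, 1) * 0.1 for g in 'fio'}
b = {g: torch.zeros(1, C_hid, 1, 1) for g in 'fioc'}
h_t = torch.zeros(N, C_hid, H, W_)
c_t = torch.zeros(N, C_hid, H, W_)
h_t, c_t = conv_lstm_step(torch.randn(N, C_in, H, W_), h_t, c_t, w_x, w_h, v, b)
```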

Training


An RNN using LSTM units can be trained in a supervised fashion on a set of training sequences, using an optimization algorithm such as gradient descent combined with backpropagation through time to compute the gradients needed during optimization, so that each weight of the LSTM network is changed in proportion to the derivative of the error (at the output layer of the LSTM network) with respect to the corresponding weight.
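
A minimal sketch of such a setup, using PyTorch as one possible toolkit, is shown below; the model class, dimensions, optimizer choice, and random toy data are placeholders rather than anything prescribed by the text above.

```python
import torch
import torch.nn as nn

class SequenceRegressor(nn.Module):
    def __init__(self, n_features=8, n_hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, 1)

    def forward(self, x):                # x: (batch, time, features)
        out, _ = self.lstm(x)            # out: (batch, time, hidden)
        return self.head(out[:, -1])     # predict from the last time step

model = SequenceRegressor()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # plain gradient descent
loss_fn = nn.MSELoss()

# Toy training data: 64 random sequences of length 20 with scalar targets.
x = torch.randn(64, 20, 8)
y = torch.randn(64, 1)

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()    # backpropagation through time over the unrolled sequence
    optimizer.step()   # adjust each weight in proportion to its error derivative
```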

A problem with using gradient descent for standard RNNs is that error gradients vanish exponentially quickly with the size of the time lag between important events. This is due to $\lim_{n \to \infty} W^n = 0$ if the spectral radius of $W$ is smaller than 1.[2][24]
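
A tiny numeric illustration of this statement, with an arbitrary 4×4 matrix rescaled to a spectral radius below 1:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 4))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # rescale so the spectral radius is 0.9

for n in (1, 10, 50, 100):
    print(n, np.linalg.norm(np.linalg.matrix_power(W, n)))
# The printed norms shrink roughly like 0.9**n, i.e. the back-propagated factor vanishes.
```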

However, with LSTM units, when error values are back-propagated from the output layer, the error remains in the LSTM unit's cell. This "error carousel" continuously feeds error back to each of the LSTM unit's gates, until they learn to cut off the value.

CTC score function


Many applications use stacks of LSTM RNNs[25] and train them by connectionist temporal classification (CTC)[5] to find an RNN weight matrix that maximizes the probability of the label sequences in a training set, given the corresponding input sequences. CTC achieves both alignment and recognition.
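As a hedged sketch of how such training can be wired up with a modern toolkit, the snippet below pairs an LSTM with torch.nn.CTCLoss; the feature dimension, alphabet size, sequence lengths, and random labels are placeholder assumptions, not values from the cited work.

```python
import torch
import torch.nn as nn

T, N, C = 50, 16, 20            # input length, batch size, classes (incl. blank = 0)
lstm = nn.LSTM(input_size=40, hidden_size=128)
classifier = nn.Linear(128, C)
ctc = nn.CTCLoss(blank=0)

x = torch.randn(T, N, 40)                            # e.g. acoustic feature frames
targets = torch.randint(1, C, (N, 30))               # label sequences (no blanks)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.randint(10, 31, (N,), dtype=torch.long)

out, _ = lstm(x)
log_probs = classifier(out).log_softmax(dim=-1)      # (T, N, C) as CTCLoss expects
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()   # gradients raise the probability of the label sequences
```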

Alternatives


Sometimes, it can be advantageous to train (parts of) an LSTM by neuroevolution[7] or by policy gradient methods, especially when there is no "teacher" (that is, training labels).

Applications


Applications of LSTM include:

2015: Google started using an LSTM trained by CTC for speech recognition on Google Voice.[50][51] According to the official blog post, the new model cut transcription errors by 49%.[52]

2016: Google started using an LSTM to suggest messages in the Allo conversation app.[53] In the same year, Google released the Google Neural Machine Translation system for Google Translate which used LSTMs to reduce translation errors by 60%.[10][54][55]

Apple announced at its Worldwide Developers Conference that it would start using the LSTM for QuickType[56][57][58] in the iPhone and for Siri.[59][60]

Amazon released Polly, which generates the voices behind Alexa, using a bidirectional LSTM for the text-to-speech technology.[61]

2017: Facebook performed some 4.5 billion automatic translations every day using long short-term memory networks.[11]

Microsoft reported reaching 94.9% recognition accuracy on the Switchboard corpus, incorporating a vocabulary of 165,000 words. The approach used "dialog session-based long-short-term memory".[62]

2018: OpenAI used LSTM trained by policy gradients to beat humans in the complex video game of Dota 2,[15] and to control a human-like robot hand that manipulates physical objects with unprecedented dexterity.[14][63]

2019: DeepMind used LSTM trained by policy gradients to excel at the complex video game of StarCraft II.[16][63]

History


Development


Aspects of LSTM were anticipated by "focused back-propagation" (Mozer, 1989),[64] cited by the LSTM paper.[1]

Sepp Hochreiter's 1991 German diploma thesis analyzed the vanishing gradient problem and developed principles of the method.[2] His supervisor, Jürgen Schmidhuber, considered the thesis highly significant.[65]

An early version of LSTM was published in 1995 in a technical report by Sepp Hochreiter and Jürgen Schmidhuber,[66] then published in the NIPS 1996 conference.[3]

The most commonly used reference point for LSTM was published in 1997 in the journal Neural Computation.[1] By introducing Constant Error Carousel (CEC) units, LSTM deals with the vanishing gradient problem. The initial version of LSTM block included cells, input and output gates.[20]

(Felix Gers, Jürgen Schmidhuber, and Fred Cummins, 1999)[67] introduced the forget gate (also called the "keep gate") into the LSTM architecture, enabling the LSTM to reset its own state.[20] This is the most commonly used version of LSTM nowadays.

(Gers, Schmidhuber, and Cummins, 2000) added peephole connections.[21][22] Additionally, the output activation function was omitted.[20]

Development of variants


(Graves, Fernandez, Gomez, and Schmidhuber, 2006)[5] introduced a new error function for LSTM: connectionist temporal classification (CTC) for simultaneous alignment and recognition of sequences.

(Graves, Schmidhuber, 2005)[26] published LSTM with full backpropagation through time and bidirectional LSTM.

(Kyunghyun Cho et al., 2014)[68] published a simplified variant of the forget gate LSTM[67] called Gated recurrent unit (GRU).

(Rupesh Kumar Srivastava, Klaus Greff, and Schmidhuber, 2015) used LSTM principles[67] to create the Highway network, a feedforward neural network with hundreds of layers, much deeper than previous networks.[69][70][71] Concurrently, the ResNet architecture was developed. It is equivalent to an open-gated or gateless highway network.[72]

A modern upgrade of LSTM called xLSTM was published by a team led by Sepp Hochreiter (Beck et al., 2024).[73][74] One of the two blocks (mLSTM) of the architecture is parallelizable like the Transformer architecture, while the other (sLSTM) allows state tracking.

Applications


2001: Gers and Schmidhuber trained LSTM to learn languages unlearnable by traditional models such as Hidden Markov Models.[21][63]

Hochreiter et al. used LSTM for meta-learning (i.e. learning a learning algorithm).[75]

2004: First successful application of LSTM to speech, by Alex Graves et al.[76][63]

2005: Daan Wierstra, Faustino Gomez, and Schmidhuber trained LSTM by neuroevolution without a teacher.[7]

Mayer et al. trained LSTM to control robots.[13]

2007: Wierstra, Foerster, Peters, and Schmidhuber trained LSTM by policy gradients for reinforcement learning without a teacher.[77]

Hochreiter, Heusel, and Obermayer applied LSTM to protein homology detection in the field of biology.[37]

2009: Justin Bayer et al. introduced neural architecture search for LSTM.[78][63]

2009: An LSTM trained by CTC won the ICDAR connected handwriting recognition competition. Three such models were submitted by a team led by Alex Graves.[79] One was the most accurate model in the competition and another was the fastest.[80] This was the first time an RNN won international competitions.[63]

2013: Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton used LSTM networks as a major component of a network that achieved a record 17.7% phoneme error rate on the classic TIMIT natural speech dataset.[28]

2017: Researchers from Michigan State University, IBM Research, and Cornell University published a study in the Knowledge Discovery and Data Mining (KDD) conference.[81] Their time-aware LSTM (T-LSTM) performs better on certain data sets than standard LSTM.

See also


References

  1. ^ a b c d e Sepp Hochreiter; Jürgen Schmidhuber (1997). "Long short-term memory". Neural Computation. 9 (8): 1735–1780. doi:10.1162/neco.1997.9.8.1735. PMID?9377276. S2CID?1915014.
  2. ^ a b c Hochreiter, Sepp (1991). Untersuchungen zu dynamischen neuronalen Netzen (PDF) (diploma thesis). Technical University Munich, Institute of Computer Science.
  3. ^ a b Hochreiter, Sepp; Schmidhuber, Jürgen (2025-08-14). "LSTM can solve hard long time lag problems". Proceedings of the 9th International Conference on Neural Information Processing Systems. NIPS'96. Cambridge, MA, USA: MIT Press: 473–479.
  4. ^ a b c Felix A. Gers; Jürgen Schmidhuber; Fred Cummins (2000). "Learning to Forget: Continual Prediction with LSTM". Neural Computation. 12 (10): 2451–2471. CiteSeerX?10.1.1.55.5709. doi:10.1162/089976600300015015. PMID?11032042. S2CID?11598600.
  5. ^ a b c Graves, Alex; Fernández, Santiago; Gomez, Faustino; Schmidhuber, Jürgen (2006). "Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks". In Proceedings of the International Conference on Machine Learning, ICML 2006: 369–376. CiteSeerX?10.1.1.75.6306.
  6. ^ Karim, Fazle; Majumdar, Somshubra; Darabi, Houshang; Chen, Shun (2018). "LSTM Fully Convolutional Networks for Time Series Classification". IEEE Access. 6: 1662–1669. arXiv:1709.05206. Bibcode:2018IEEEA...6.1662K. doi:10.1109/ACCESS.2017.2779939. ISSN?2169-3536.
  7. ^ a b c d Wierstra, Daan; Schmidhuber, J.; Gomez, F. J. (2005). "Evolino: Hybrid Neuroevolution/Optimal Linear Search for Sequence Learning". Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI), Edinburgh: 853–858.
  8. ^ Sak, Hasim; Senior, Andrew; Beaufays, Francoise (2014). "Long Short-Term Memory recurrent neural network architectures for large scale acoustic modeling" (PDF). Archived from the original (PDF) on 2025-08-14.
  9. ^ Li, Xiangang; Wu, Xihong (2025-08-14). "Constructing Long Short-Term Memory based Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition". arXiv:1410.4281 [cs.CL].
  10. ^ a b Wu, Yonghui; Schuster, Mike; Chen, Zhifeng; Le, Quoc V.; Norouzi, Mohammad; Macherey, Wolfgang; Krikun, Maxim; Cao, Yuan; Gao, Qin (2025-08-14). "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation". arXiv:1609.08144 [cs.CL].
  11. ^ a b Ong, Thuy (4 August 2017). "Facebook's translations are now powered completely by AI". www.allthingsdistributed.com. Retrieved 2025-08-14.
  12. ^ Sahidullah, Md; Patino, Jose; Cornell, Samuele; Yin, Ruiking; Sivasankaran, Sunit; Bredin, Herve; Korshunov, Pavel; Brutti, Alessio; Serizel, Romain; Vincent, Emmanuel; Evans, Nicholas; Marcel, Sebastien; Squartini, Stefano; Barras, Claude (2025-08-14). "The Speed Submission to DIHARD II: Contributions & Lessons Learned". arXiv:1911.02388 [eess.AS].
  13. ^ a b c Mayer, H.; Gomez, F.; Wierstra, D.; Nagy, I.; Knoll, A.; Schmidhuber, J. (October 2006). "A System for Robotic Heart Surgery that Learns to Tie Knots Using Recurrent Neural Networks". 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems. pp.?543–548. CiteSeerX?10.1.1.218.3399. doi:10.1109/IROS.2006.282190. ISBN?978-1-4244-0258-8. S2CID?12284900.
  14. ^ a b "Learning Dexterity". OpenAI. July 30, 2018. Retrieved 2025-08-14.
  15. ^ a b Rodriguez, Jesus (July 2, 2018). "The Science Behind OpenAI Five that just Produced One of the Greatest Breakthrough in the History of AI". Towards Data Science. Archived from the original on 2025-08-14. Retrieved 2025-08-14.
  16. ^ a b Stanford, Stacy (January 25, 2019). "DeepMind's AI, AlphaStar Showcases Significant Progress Towards AGI". Medium ML Memoirs. Retrieved 2025-08-14.
  17. ^ Schmidhuber, Jürgen (2021). "The 2010s: Our Decade of Deep Learning / Outlook on the 2020s". AI Blog. IDSIA, Switzerland. Retrieved 2025-08-14.
  18. ^ Calin, Ovidiu (14 February 2020). Deep Learning Architectures. Cham, Switzerland: Springer Nature. p.?555. ISBN?978-3-030-36720-6.
  19. ^ Lakretz, Yair; Kruszewski, German; Desbordes, Theo; Hupkes, Dieuwke; Dehaene, Stanislas; Baroni, Marco (2019), "The emergence of number and syntax units in", The emergence of number and syntax units (PDF), Association for Computational Linguistics, pp.?11–20, doi:10.18653/v1/N19-1002, hdl:11245.1/16cb6800-e10d-4166-8e0b-fed61ca6ebb4, S2CID?81978369
  20. ^ a b c d Klaus Greff; Rupesh Kumar Srivastava; Jan Koutník; Bas R. Steunebrink; Jürgen Schmidhuber (2015). "LSTM: A Search Space Odyssey". IEEE Transactions on Neural Networks and Learning Systems. 28 (10): 2222–2232. arXiv:1503.04069. Bibcode:2015arXiv150304069G. doi:10.1109/TNNLS.2016.2582924. PMID?27411231. S2CID?3356463.
  21. ^ a b c d e f Gers, F. A.; Schmidhuber, J. (2001). "LSTM Recurrent Networks Learn Simple Context Free and Context Sensitive Languages" (PDF). IEEE Transactions on Neural Networks. 12 (6): 1333–1340. doi:10.1109/72.963769. PMID?18249962. S2CID?10192330. Archived from the original (PDF) on 2025-08-14.
  22. ^ a b c d Gers, F.; Schraudolph, N.; Schmidhuber, J. (2002). "Learning precise timing with LSTM recurrent networks" (PDF). Journal of Machine Learning Research. 3: 115–143.
  23. ^ Xingjian Shi; Zhourong Chen; Hao Wang; Dit-Yan Yeung; Wai-kin Wong; Wang-chun Woo (2015). "Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting". Proceedings of the 28th International Conference on Neural Information Processing Systems: 802–810. arXiv:1506.04214. Bibcode:2015arXiv150604214S.
  24. ^ Hochreiter, S.; Bengio, Y.; Frasconi, P.; Schmidhuber, J. (2001). "Gradient Flow in Recurrent Nets: the Difficulty of Learning Long-Term Dependencies (PDF Download Available)". In Kremer and, S. C.; Kolen, J. F. (eds.). A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press.
  25. ^ Fernández, Santiago; Graves, Alex; Schmidhuber, Jürgen (2007). "Sequence labelling in structured domains with hierarchical recurrent neural networks". Proc. 20th Int. Joint Conf. On Artificial Intelligence, Ijcai 2007: 774–779. CiteSeerX?10.1.1.79.1887.
  26. ^ a b Graves, A.; Schmidhuber, J. (2005). "Framewise phoneme classification with bidirectional LSTM and other neural network architectures". Neural Networks. 18 (5–6): 602–610. CiteSeerX?10.1.1.331.5800. doi:10.1016/j.neunet.2005.06.042. PMID?16112549. S2CID?1856462.
  27. ^ Fernández, S.; Graves, A.; Schmidhuber, J. (9 September 2007). "An Application of Recurrent Neural Networks to Discriminative Keyword Spotting". Proceedings of the 17th International Conference on Artificial Neural Networks. ICANN'07. Berlin, Heidelberg: Springer-Verlag: 220–229. ISBN?978-3540746935. Retrieved 28 December 2023.
  28. ^ a b Graves, Alex; Mohamed, Abdel-rahman; Hinton, Geoffrey (2013). "Speech recognition with deep recurrent neural networks". 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. pp.?6645–6649. arXiv:1303.5778. doi:10.1109/ICASSP.2013.6638947. ISBN?978-1-4799-0356-6. S2CID?206741496.
  29. ^ Kratzert, Frederik; Klotz, Daniel; Shalev, Guy; Klambauer, Günter; Hochreiter, Sepp; Nearing, Grey (2025-08-14). "Towards learning universal, regional, and local hydrological behaviors via machine learning applied to large-sample datasets". Hydrology and Earth System Sciences. 23 (12): 5089–5110. arXiv:1907.08456. Bibcode:2019HESS...23.5089K. doi:10.5194/hess-23-5089-2019. ISSN?1027-5606.
  30. ^ Eck, Douglas; Schmidhuber, Jürgen (2025-08-14). "Learning the Long-Term Structure of the Blues". Artificial Neural Networks — ICANN 2002. Lecture Notes in Computer Science. Vol.?2415. Springer, Berlin, Heidelberg. pp.?284–289. CiteSeerX?10.1.1.116.3620. doi:10.1007/3-540-46084-5_47. ISBN?978-3540460848.
  31. ^ Schmidhuber, J.; Gers, F.; Eck, D.; Schmidhuber, J.; Gers, F. (2002). "Learning nonregular languages: A comparison of simple recurrent networks and LSTM". Neural Computation. 14 (9): 2039–2041. CiteSeerX?10.1.1.11.7369. doi:10.1162/089976602320263980. PMID?12184841. S2CID?30459046.
  32. ^ Perez-Ortiz, J. A.; Gers, F. A.; Eck, D.; Schmidhuber, J. (2003). "Kalman filters improve LSTM network performance in problems unsolvable by traditional recurrent nets". Neural Networks. 16 (2): 241–250. CiteSeerX?10.1.1.381.1992. doi:10.1016/s0893-6080(02)00219-8. PMID?12628609.
  33. ^ A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. Advances in Neural Information Processing Systems 22, NIPS'22, pp 545–552, Vancouver, MIT Press, 2009.
  34. ^ Graves, A.; Fernández, S.; Liwicki, M.; Bunke, H.; Schmidhuber, J. (3 December 2007). "Unconstrained Online Handwriting Recognition with Recurrent Neural Networks". Proceedings of the 20th International Conference on Neural Information Processing Systems. NIPS'07. USA: Curran Associates Inc.: 577–584. ISBN?9781605603520. Retrieved 28 December 2023.
  35. ^ Baccouche, M.; Mamalet, F.; Wolf, C.; Garcia, C.; Baskurt, A. (2011). "Sequential Deep Learning for Human Action Recognition". In Salah, A. A.; Lepri, B. (eds.). 2nd International Workshop on Human Behavior Understanding (HBU). Lecture Notes in Computer Science. Vol.?7065. Amsterdam, Netherlands: Springer. pp.?29–39. doi:10.1007/978-3-642-25446-8_4. ISBN?978-3-642-25445-1.
  36. ^ Huang, Jie; Zhou, Wengang; Zhang, Qilin; Li, Houqiang; Li, Weiping (2025-08-14). "Video-based Sign Language Recognition without Temporal Segmentation". arXiv:1801.10111 [cs.CV].
  37. ^ a b Hochreiter, S.; Heusel, M.; Obermayer, K. (2007). "Fast model-based protein homology detection without alignment". Bioinformatics. 23 (14): 1728–1736. doi:10.1093/bioinformatics/btm247. PMID?17488755.
  38. ^ Thireou, T.; Reczko, M. (2007). "Bidirectional Long Short-Term Memory Networks for predicting the subcellular localization of eukaryotic proteins". IEEE/ACM Transactions on Computational Biology and Bioinformatics. 4 (3): 441–446. doi:10.1109/tcbb.2007.1015. PMID?17666763. S2CID?11787259.
  39. ^ Malhotra, Pankaj; Vig, Lovekesh; Shroff, Gautam; Agarwal, Puneet (April 2015). "Long Short Term Memory Networks for Anomaly Detection in Time Series" (PDF). European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning — ESANN 2015. Archived from the original (PDF) on 2025-08-14. Retrieved 2025-08-14.
  40. ^ Tax, N.; Verenich, I.; La Rosa, M.; Dumas, M. (2017). "Predictive Business Process Monitoring with LSTM Neural Networks". Advanced Information Systems Engineering. Lecture Notes in Computer Science. Vol.?10253. pp.?477–492. arXiv:1612.02130. doi:10.1007/978-3-319-59536-8_30. ISBN?978-3-319-59535-1. S2CID?2192354.
  41. ^ Choi, E.; Bahadori, M.T.; Schuetz, E.; Stewart, W.; Sun, J. (2016). "Doctor AI: Predicting Clinical Events via Recurrent Neural Networks". JMLR Workshop and Conference Proceedings. 56: 301–318. arXiv:1511.05942. Bibcode:2015arXiv151105942C. PMC?5341604. PMID?28286600.
  42. ^ Jia, Robin; Liang, Percy (2016). "Data Recombination for Neural Semantic Parsing". arXiv:1606.03622 [cs.CL].
  43. ^ Wang, Le; Duan, Xuhuan; Zhang, Qilin; Niu, Zhenxing; Hua, Gang; Zheng, Nanning (2025-08-14). "Segment-Tube: Spatio-Temporal Action Localization in Untrimmed Videos with Per-Frame Segmentation" (PDF). Sensors. 18 (5): 1657. Bibcode:2018Senso..18.1657W. doi:10.3390/s18051657. ISSN?1424-8220. PMC?5982167. PMID?29789447.
  44. ^ Duan, Xuhuan; Wang, Le; Zhai, Changbo; Zheng, Nanning; Zhang, Qilin; Niu, Zhenxing; Hua, Gang (2018). "Joint Spatio-Temporal Action Localization in Untrimmed Videos with Per-Frame Segmentation". 2018 25th IEEE International Conference on Image Processing (ICIP). 25th IEEE International Conference on Image Processing (ICIP). pp.?918–922. doi:10.1109/icip.2018.8451692. ISBN?978-1-4799-7061-2.
  45. ^ Orsini, F.; Gastaldi, M.; Mantecchini, L.; Rossi, R. (2019). Neural networks trained with WiFi traces to predict airport passenger behavior. 6th International Conference on Models and Technologies for Intelligent Transportation Systems. Krakow: IEEE. arXiv:1910.14026. doi:10.1109/MTITS.2019.8883365. 8883365.
  46. ^ Zhao, Z.; Chen, W.; Wu, X.; Chen, P.C.Y.; Liu, J. (2017). "LSTM network: A deep learning approach for Short-term traffic forecast". IET Intelligent Transport Systems. 11 (2): 68–75. doi:10.1049/iet-its.2016.0208. S2CID?114567527.
  47. ^ Gupta A, Müller AT, Huisman BJH, Fuchs JA, Schneider P, Schneider G (2018). "Generative Recurrent Networks for De Novo Drug Design". Mol Inform. 37 (1–2). doi:10.1002/minf.201700111. PMC?5836943. PMID?29095571.{{cite journal}}: CS1 maint: multiple names: authors list (link)
  48. ^ Saiful Islam, Md.; Hossain, Emam (2025-08-14). "Foreign Exchange Currency Rate Prediction using a GRU-LSTM Hybrid Network". Soft Computing Letters. 3: 100009. doi:10.1016/j.socl.2020.100009. ISSN?2666-2221.
  49. ^ Martin, Abbey; Hill, Andrew J.; Seiler, Konstantin M.; Balamurali, Mehala (2025-08-14). "Automatic excavator action recognition and localisation for untrimmed video using hybrid LSTM-Transformer networks". International Journal of Mining, Reclamation and Environment. 38 (5): 353–372. doi:10.1080/17480930.2023.2290364. ISSN?1748-0930.
  50. ^ Beaufays, Fran?oise (August 11, 2015). "The neural networks behind Google Voice transcription". Research Blog. Retrieved 2025-08-14.
  51. ^ Sak, Ha?im; Senior, Andrew; Rao, Kanishka; Beaufays, Fran?oise; Schalkwyk, Johan (September 24, 2015). "Google voice search: faster and more accurate". Research Blog. Retrieved 2025-08-14.
  52. ^ "Neon prescription... or rather, New transcription for Google Voice". Official Google Blog. 23 July 2015. Retrieved 2025-08-14.
  53. ^ Khaitan, Pranav (May 18, 2016). "Chat Smarter with Allo". Research Blog. Retrieved 2025-08-14.
  54. ^ Metz, Cade (September 27, 2016). "An Infusion of AI Makes Google Translate More Powerful Than Ever | WIRED". Wired. Retrieved 2025-08-14.
  55. ^ "A Neural Network for Machine Translation, at Production Scale". Google AI Blog. 27 September 2016. Retrieved 2025-08-14.
  56. ^ Efrati, Amir (June 13, 2016). "Apple's Machines Can Learn Too". The Information. Retrieved 2025-08-14.
  57. ^ Ranger, Steve (June 14, 2016). "iPhone, AI and big data: Here's how Apple plans to protect your privacy". ZDNet. Retrieved 2025-08-14.
  58. ^ "Can Global Semantic Context Improve Neural Language Models? – Apple". Apple Machine Learning Journal. Retrieved 2025-08-14.
  59. ^ Smith, Chris (2025-08-14). "iOS 10: Siri now works in third-party apps, comes with extra AI features". BGR. Retrieved 2025-08-14.
  60. ^ Capes, Tim; Coles, Paul; Conkie, Alistair; Golipour, Ladan; Hadjitarkhani, Abie; Hu, Qiong; Huddleston, Nancy; Hunt, Melvyn; Li, Jiangchuan; Neeracher, Matthias; Prahallad, Kishore (2025-08-14). "Siri On-Device Deep Learning-Guided Unit Selection Text-to-Speech System". Interspeech 2017. ISCA: 4011–4015. doi:10.21437/Interspeech.2017-1798.
  61. ^ Vogels, Werner (30 November 2016). "Bringing the Magic of Amazon AI and Alexa to Apps on AWS. – All Things Distributed". www.allthingsdistributed.com. Retrieved 2025-08-14.
  62. ^ Xiong, W.; Wu, L.; Alleva, F.; Droppo, J.; Huang, X.; Stolcke, A. (April 2018). "The Microsoft 2017 Conversational Speech Recognition System". 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE. pp.?5934–5938. arXiv:1708.06073. doi:10.1109/ICASSP.2018.8461870. ISBN?978-1-5386-4658-8.
  63. ^ a b c d e f Schmidhuber, Juergen (10 May 2021). "Deep Learning: Our Miraculous Year 1990-1991". arXiv:2005.05744 [cs.NE].
  64. ^ Mozer, Mike (1989). "A Focused Backpropagation Algorithm for Temporal Pattern Recognition". Complex Systems.
  65. ^ Schmidhuber, Juergen (2022). "Annotated History of Modern AI and Deep Learning". arXiv:2212.11279 [cs.NE].
  66. ^ Sepp Hochreiter; Jürgen Schmidhuber (21 August 1995), Long Short Term Memory, Wikidata?Q98967430
  67. ^ a b c Gers, Felix; Schmidhuber, Jürgen; Cummins, Fred (1999). "Learning to forget: Continual prediction with LSTM". 9th International Conference on Artificial Neural Networks: ICANN '99. Vol.?1999. pp.?850–855. doi:10.1049/cp:19991218. ISBN?0-85296-721-7.
  68. ^ Cho, Kyunghyun; van Merrienboer, Bart; Gulcehre, Caglar; Bahdanau, Dzmitry; Bougares, Fethi; Schwenk, Holger; Bengio, Yoshua (2014). "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation". arXiv:1406.1078 [cs.CL].
  69. ^ Srivastava, Rupesh Kumar; Greff, Klaus; Schmidhuber, Jürgen (2 May 2015). "Highway Networks". arXiv:1505.00387 [cs.LG].
  70. ^ Srivastava, Rupesh K; Greff, Klaus; Schmidhuber, Juergen (2015). "Training Very Deep Networks". Advances in Neural Information Processing Systems. 28. Curran Associates, Inc.: 2377–2385.
  71. ^ Schmidhuber, Jürgen (2021). "The most cited neural networks all build on work done in my labs". AI Blog. IDSIA, Switzerland. Retrieved 2025-08-14.
  72. ^ He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian (2016). "Deep Residual Learning for Image Recognition". 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp.?770–778. arXiv:1512.03385. doi:10.1109/CVPR.2016.90. ISBN?978-1-4673-8851-1.
  73. ^ Beck, Maximilian; P?ppel, Korbinian; Spanring, Markus; Auer, Andreas; Prudnikova, Oleksandra; Kopp, Michael; Klambauer, Günter; Brandstetter, Johannes; Hochreiter, Sepp (2025-08-14). "xLSTM: Extended Long Short-Term Memory". arXiv:2405.04517 [cs.LG].
  74. ^ NX-AI/xlstm, NXAI, 2025-08-14, retrieved 2025-08-14
  75. ^ Hochreiter, S.; Younger, A. S.; Conwell, P. R. (2001). "Learning to Learn Using Gradient Descent". Artificial Neural Networks — ICANN 2001 (PDF). Lecture Notes in Computer Science. Vol.?2130. pp.?87–94. CiteSeerX?10.1.1.5.323. doi:10.1007/3-540-44668-0_13. ISBN?978-3-540-42486-4. ISSN?0302-9743. S2CID?52872549.
  76. ^ Graves, Alex; Beringer, Nicole; Eck, Douglas; Schmidhuber, Juergen (2004). Biologically Plausible Speech Recognition with LSTM Neural Nets. Workshop on Biologically Inspired Approaches to Advanced Information Technology, Bio-ADIT 2004, Lausanne, Switzerland. pp.?175–184.
  77. ^ Wierstra, Daan; Foerster, Alexander; Peters, Jan; Schmidhuber, Juergen (2005). "Solving Deep Memory POMDPs with Recurrent Policy Gradients". International Conference on Artificial Neural Networks ICANN'07.
  78. ^ Bayer, Justin; Wierstra, Daan; Togelius, Julian; Schmidhuber, Juergen (2009). "Evolving memory cell structures for sequence learning". International Conference on Artificial Neural Networks ICANN'09, Cyprus.
  79. ^ Graves, A.; Liwicki, M.; Fernández, S.; Bertolami, R.; Bunke, H.; Schmidhuber, J. (May 2009). "A Novel Connectionist System for Unconstrained Handwriting Recognition". IEEE Transactions on Pattern Analysis and Machine Intelligence. 31 (5): 855–868. CiteSeerX?10.1.1.139.4502. doi:10.1109/tpami.2008.137. ISSN?0162-8828. PMID?19299860. S2CID?14635907.
  80. ^ M?rgner, Volker; Abed, Haikal El (July 2009). "ICDAR 2009 Arabic Handwriting Recognition Competition". 2009 10th International Conference on Document Analysis and Recognition. pp.?1383–1387. doi:10.1109/ICDAR.2009.256. ISBN?978-1-4244-4500-4. S2CID?52851337.
  81. ^ Baytas, Inci M.; Xiao, Cao; Zhang, Xi; Wang, Fei; Jain, Anil K.; Zhou, Jiayu (2025-08-14). "Patient Subtyping via Time-Aware LSTM Networks". Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY, USA: Association for Computing Machinery. pp.?65–74. doi:10.1145/3097983.3097997. ISBN?978-1-4503-4887-4.

Further reading
