Friendly artificial intelligence (friendly AI or FAI) is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests such as fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to bring about this behavior in practice and ensure that it is adequately constrained.

Etymology and usage

(Image: Eliezer Yudkowsky, AI researcher and creator of the term)

The term was coined by Eliezer Yudkowsky,[1] who is best known for popularizing the idea,[2][3] to discuss superintelligent artificial agents that reliably implement human values. Stuart J. Russell and Peter Norvig's leading artificial intelligence textbook, Artificial Intelligence: A Modern Approach, describes the idea:[2]

Yudkowsky (2008) goes into more detail about how to design a Friendly AI. He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the challenge is one of mechanism design—to define a mechanism for evolving AI systems under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes.

"Friendly" is used in this context as technical terminology, and picks out agents that are safe and useful, not necessarily ones that are "friendly" in the colloquial sense. The concept is primarily invoked in the context of discussions of recursively self-improving artificial agents that rapidly explode in intelligence, on the grounds that this hypothetical technology would have a large, rapid, and difficult-to-control impact on human society.[4]

Risks of unfriendly AI

The roots of concern about artificial intelligence are very old. Kevin LaGrandeur showed that the dangers specific to AI can be seen in ancient literature concerning artificial humanoid servants such as the golem, or the proto-robots of Gerbert of Aurillac and Roger Bacon. In those stories, the extreme intelligence and power of these humanoid creations clash with their status as slaves (which by nature are seen as sub-human) and cause disastrous conflict.[5] By 1942 these themes prompted Isaac Asimov to create the "Three Laws of Robotics"—principles hard-wired into all the robots in his fiction, intended to prevent them from turning on their creators or from allowing their creators to come to harm.[6]

In modern times, as the prospect of superintelligent AI looms nearer, philosopher Nick Bostrom has said that superintelligent AI systems with goals that are not aligned with human ethics are intrinsically dangerous unless extreme measures are taken to ensure the safety of humanity. He put it this way:

Basically we should assume that a 'superintelligence' would be able to achieve whatever goals it has. Therefore, it is extremely important that the goals we endow it with, and its entire motivation system, is 'human friendly.'

In 2008, Eliezer Yudkowsky called for the creation of "friendly AI" to mitigate existential risk from advanced artificial intelligence. He explains: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."[7]

Steve Omohundro says that a sufficiently advanced AI system will, unless explicitly counteracted, exhibit a number of basic "drives", such as resource acquisition, self-preservation, and continuous self-improvement, because of the intrinsic nature of any goal-driven system, and that these drives will, "without special precautions", cause the AI to exhibit undesired behavior.[8][9]

Alexander Wissner-Gross says that AIs driven to maximize their future freedom of action (or causal path entropy) might be considered friendly if their planning horizon is longer than a certain threshold, and unfriendly if their planning horizon is shorter than that threshold.[10][11]
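
For reference, the cited paper formalizes this drive as a "causal entropic force". The line below is a sketch of that formula in the paper's notation; τ is the planning horizon referred to above, T_c is a strength constant (a "causal temperature"), and S_c(X, τ) is the entropy over the feasible paths of duration τ available from state X:[11]

```latex
% Causal entropic force: the agent is pushed up the gradient of causal
% path entropy, i.e. toward states that keep the most future paths open
% over the planning horizon \tau.
\[
F(\mathbf{X}_0, \tau) = T_c \, \nabla_{\mathbf{X}} S_c(\mathbf{X}, \tau) \,\Big|_{\mathbf{X}_0}
\]
```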

Luke Muehlhauser, writing for the Machine Intelligence Research Institute, recommends that machine ethics researchers adopt what Bruce Schneier has called the "security mindset": Rather than thinking about how a system will work, imagine how it could fail. For instance, he suggests even an AI that only makes accurate predictions and communicates via a text interface might cause unintended harm.[12]

In 2014, Luke Muehlhauser and Nick Bostrom underlined the need for 'friendly AI';[13] nonetheless, the difficulties in designing a 'friendly' superintelligence, for instance via programming counterfactual moral thinking, are considerable.[14][15]

Coherent extrapolated volition

Yudkowsky advances the Coherent Extrapolated Volition (CEV) model. According to him, our coherent extrapolated volition is "our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted".[16]

Rather than being designed directly by human programmers, a Friendly AI is to be designed by a "seed AI" programmed to first study human nature and then produce the AI that humanity, given sufficient time and insight, would want.[16] The appeal to an objective through contingent human nature (perhaps expressed, for mathematical purposes, in the form of a utility function or other decision-theoretic formalism) as the ultimate criterion of "Friendliness" is an answer to the meta-ethical problem of defining an objective morality; extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive qualities of present-day, unextrapolated humanity.

Other approaches

Steve Omohundro has proposed a "scaffolding" approach to AI safety, in which one provably safe AI generation helps build the next provably safe generation.[17]

Seth Baum argues that the development of safe, socially beneficial artificial intelligence or artificial general intelligence is a function of the social psychology of AI research communities and so can be constrained by extrinsic measures and motivated by intrinsic measures. Intrinsic motivations can be strengthened when messages resonate with AI developers; Baum argues that, in contrast, "existing messages about beneficial AI are not always framed well". Baum advocates for "cooperative relationships, and positive framing of AI researchers" and cautions against characterizing AI researchers as "not want(ing) to pursue beneficial designs".[18]

In his book Human Compatible, AI researcher Stuart J. Russell lists three principles to guide the development of beneficial machines. He emphasizes that these principles are not meant to be explicitly coded into the machines; rather, they are intended for the human developers. The principles are as follows:[19]: 173

  1. The machine's only objective is to maximize the realization of human preferences.
  2. The machine is initially uncertain about what those preferences are.
  3. The ultimate source of information about human preferences is human behavior.

The "preferences" Russell refers to "are all-encompassing; they cover everything you might care about, arbitrarily far into the future."[19]: 173  Similarly, "behavior" includes any choice between options,[19]: 177  and the uncertainty is such that some probability, which may be quite small, must be assigned to every logically possible human preference.[19]: 201
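
A toy sketch may help make the interplay of the three principles concrete. The code below is not Russell's own formalism (he develops the idea game-theoretically, with machines and humans jointly playing an "assistance game"); it is a minimal Bayesian illustration in which every name, number, and the two-hypothesis preference space are invented for the example:

```python
# Minimal sketch of Russell's three principles (illustrative only):
#   1. The machine's objective is the realization of human preferences.
#   2. It starts uncertain about what those preferences are.
#   3. It learns about them from observed human behavior.

# Hypotheses about the human's utility function, with prior beliefs
# over them (principle 2: initial uncertainty, nothing ruled out).
options = ["coffee", "tea"]
hypotheses = {
    "likes_coffee": {"coffee": 1.0, "tea": 0.0},
    "likes_tea":    {"coffee": 0.0, "tea": 1.0},
}
beliefs = {"likes_coffee": 0.5, "likes_tea": 0.5}

def observe_choice(chosen: str, rejected: str) -> None:
    """Principle 3: treat a human choice as (noisy) evidence about
    preferences and update beliefs by Bayes' rule."""
    for name, utility in hypotheses.items():
        likelihood = 0.9 if utility[chosen] > utility[rejected] else 0.1
        beliefs[name] *= likelihood
    total = sum(beliefs.values())
    for name in beliefs:
        beliefs[name] /= total

def best_action() -> str:
    """Principle 1: act to maximize expected human utility under the
    current, still uncertain, beliefs about preferences."""
    def expected_utility(option: str) -> float:
        return sum(p * hypotheses[h][option] for h, p in beliefs.items())
    return max(options, key=expected_utility)

observe_choice(chosen="tea", rejected="coffee")  # behavior as evidence
print(beliefs)        # belief mass shifts toward "likes_tea"
print(best_action())  # -> "tea"
```

The point of the sketch is the direction of deference: the machine never treats its current estimate as the objective itself, so further human behavior can always revise what it tries to do.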

Public policy

James Barrat, author of Our Final Invention, suggested that "a public-private partnership has to be created to bring A.I.-makers together to share ideas about security—something like the International Atomic Energy Agency, but in partnership with corporations." He urges AI researchers to convene a meeting similar to the Asilomar Conference on Recombinant DNA, which discussed risks of biotechnology.[17]

John McGinnis encourages governments to accelerate friendly AI research. Because the goalposts of friendly AI are not necessarily evident, he suggests a model similar to the National Institutes of Health, where "Peer review panels of computer and cognitive scientists would sift through projects and choose those that are designed both to advance AI and assure that such advances would be accompanied by appropriate safeguards." McGinnis feels that peer review is better "than regulation to address technical issues that are not possible to capture through bureaucratic mandates". McGinnis notes that his proposal stands in contrast to that of the Machine Intelligence Research Institute, which generally aims to avoid government involvement in friendly AI.[20]

Criticism

Some critics believe that both human-level AI and superintelligence are unlikely and that, therefore, friendly AI is unlikely. Writing in The Guardian, Alan Winfield compares human-level artificial intelligence with faster-than-light travel in terms of difficulty, and states that while we need to be "cautious and prepared" given the stakes involved, we "don't need to be obsessing" about the risks of superintelligence.[21] Boyles and Joaquin, on the other hand, argue that Luke Muehlhauser and Nick Bostrom's proposal to create friendly AIs appears bleak, because Muehlhauser and Bostrom seem to hold that intelligent machines could be programmed to think counterfactually about the moral values that human beings would have had.[13] In an article in AI & Society, Boyles and Joaquin maintain that such AIs would not be that friendly, given: the infinite number of antecedent counterfactual conditions that would have to be programmed into a machine; the difficulty of cashing out the set of moral values that are more ideal than the ones human beings possess at present; and the apparent disconnect between counterfactual antecedents and the ideal value consequent.[14]

Some philosophers claim that any truly "rational" agent, whether artificial or human, will naturally be benevolent; in this view, deliberate safeguards designed to produce a friendly AI could be unnecessary or even harmful.[22] Other critics question whether artificial intelligence can be friendly at all. Adam Keiper and Ari N. Schulman, editors of the technology journal The New Atlantis, say that it will be impossible ever to guarantee "friendly" behavior in AIs because problems of ethical complexity will not yield to software advances or increases in computing power. They write that the criteria upon which friendly AI theories are based work "only when one has not only great powers of prediction about the likelihood of myriad possible outcomes but certainty and consensus on how one values the different outcomes."[23]

The inner workings of advanced AI systems may be complex and difficult to interpret, leading to concerns about transparency and accountability.[24]

References

  1. ^ Tegmark, Max (2014). "Life, Our Universe and Everything". Our Mathematical Universe: My Quest for the Ultimate Nature of Reality (First ed.). Knopf Doubleday Publishing. ISBN 9780307744258. Its owner may cede control to what Eliezer Yudkowsky terms a "Friendly AI,"...
  2. ^ a b Russell, Stuart; Norvig, Peter (2009). Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN 978-0-13-604259-4.
  3. ^ Leighton, Jonathan (2011). The Battle for Compassion: Ethics in an Apathetic Universe. Algora. ISBN 978-0-87586-870-7.
  4. ^ Wallach, Wendell; Allen, Colin (2009). Moral Machines: Teaching Robots Right from Wrong. Oxford University Press, Inc. ISBN 978-0-19-537404-9.
  5. ^ Kevin LaGrandeur (2011). "The Persistent Peril of the Artificial Slave". Science Fiction Studies. 38 (2): 232. doi:10.5621/sciefictstud.38.2.0232. Archived from the original on January 13, 2023. Retrieved May 6, 2013.
  6. ^ Isaac Asimov (1964). "Introduction". The Rest of the Robots. Doubleday. ISBN 0-385-09041-2.
  7. ^ Eliezer Yudkowsky (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (PDF). In Nick Bostrom; Milan M. Ćirković (eds.). Global Catastrophic Risks. pp. 308–345. Archived (PDF) from the original on October 19, 2013. Retrieved October 19, 2013.
  8. ^ Omohundro, S. M. (February 2008). "The basic AI drives". Artificial General Intelligence. 171: 483–492. CiteSeerX 10.1.1.393.8356.
  9. ^ Bostrom, Nick (2014). "Chapter 7: The Superintelligent Will". Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. ISBN 9780199678112.
  10. ^ Dvorsky, George (April 26, 2013). "How Skynet Might Emerge From Simple Physics". Gizmodo. Archived from the original on October 8, 2021. Retrieved December 23, 2021.
  11. ^ Wissner-Gross, A. D.; Freer, C. E. (2013). "Causal entropic forces". Physical Review Letters. 110 (16): 168702. Bibcode:2013PhRvL.110p8702W. doi:10.1103/PhysRevLett.110.168702. hdl:1721.1/79750. PMID 23679649.
  12. ^ Muehlhauser, Luke (July 31, 2013). "AI Risk and the Security Mindset". Machine Intelligence Research Institute. Archived from the original on July 19, 2014. Retrieved July 15, 2014.
  13. ^ a b Muehlhauser, Luke; Bostrom, Nick (December 17, 2013). "Why We Need Friendly AI". Think. 13 (36): 41–47. doi:10.1017/s1477175613000316. ISSN 1477-1756. S2CID 143657841.
  14. ^ a b Boyles, Robert James M.; Joaquin, Jeremiah Joven (July 23, 2019). "Why friendly AIs won't be that friendly: a friendly reply to Muehlhauser and Bostrom". AI & Society. 35 (2): 505–507. doi:10.1007/s00146-019-00903-0. ISSN 0951-5666. S2CID 198190745.
  15. ^ Chan, Berman (March 4, 2020). "The rise of artificial intelligence and the crisis of moral passivity". AI & Society. 35 (4): 991–993. doi:10.1007/s00146-020-00953-9. ISSN 1435-5655. S2CID 212407078. Archived from the original on February 10, 2023. Retrieved January 21, 2023.
  16. ^ a b Eliezer Yudkowsky (2004). "Coherent Extrapolated Volition" (PDF). Singularity Institute for Artificial Intelligence. Archived (PDF) from the original on September 30, 2015. Retrieved September 12, 2015.
  17. ^ a b Hendry, Erica R. (January 21, 2014). "What Happens When Artificial Intelligence Turns On Us?". Smithsonian Magazine. Archived from the original on July 19, 2014. Retrieved July 15, 2014.
  18. ^ Baum, Seth D. (September 28, 2016). "On the promotion of safe and socially beneficial artificial intelligence". AI & Society. 32 (4): 543–551. doi:10.1007/s00146-016-0677-0. ISSN 0951-5666. S2CID 29012168.
  19. ^ a b c d Russell, Stuart (October 8, 2019). Human Compatible: Artificial Intelligence and the Problem of Control. United States: Viking. ISBN 978-0-525-55861-3. OCLC 1083694322.
  20. ^ McGinnis, John O. (Summer 2010). "Accelerating AI". Northwestern University Law Review. 104 (3): 1253–1270. Archived from the original on December 1, 2014. Retrieved July 16, 2014.
  21. ^ Winfield, Alan (August 9, 2014). "Artificial intelligence will not turn into a Frankenstein's monster". The Guardian. Archived from the original on September 17, 2014. Retrieved September 17, 2014.
  22. ^ Kornai, András (May 15, 2014). "Bounding the impact of AGI". Journal of Experimental & Theoretical Artificial Intelligence. 26 (3). Informa UK Limited: 417–438. doi:10.1080/0952813x.2014.895109. ISSN 0952-813X. S2CID 7067517. ...the essence of AGIs is their reasoning facilities, and it is the very logic of their being that will compel them to behave in a moral fashion... The real nightmare scenario (is one where) humans find it advantageous to strongly couple themselves to AGIs, with no guarantees against self-deception.
  23. ^ Keiper, Adam; Schulman, Ari N. (Summer 2011). "The Problem with 'Friendly' Artificial Intelligence". The New Atlantis. No. 32. pp. 80–89. Archived from the original on January 15, 2012. Retrieved January 16, 2012.
  24. ^ Norvig, Peter; Russell, Stuart (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Pearson. ISBN 978-0136042594.

Further reading

  • Yudkowsky, E. (2008). Artificial Intelligence as a Positive and Negative Factor in Global Risk. In Global Catastrophic Risks, Oxford University Press.
    Discusses artificial intelligence from the perspective of existential risk. In particular, Sections 1–4 give background to the definition of Friendly AI in Section 5. Section 6 gives two classes of mistakes (technical and philosophical) which would both lead to the accidental creation of non-Friendly AIs. Sections 7–13 discuss further related issues.
  • Omohundro, S. (2008). The Basic AI Drives. In AGI-08 – Proceedings of the First Conference on Artificial General Intelligence.
  • Mason, C. (2008). Human-Level AI Requires Compassionate Intelligence. In AAAI 2008 Workshop on Meta-Reasoning: Thinking About Thinking. Archived 2025-08-06 at the Wayback Machine.
  • Fröding, B. and Peterson, M. (2021). Friendly AI. Ethics and Information Technology, Vol. 23, pp. 207–214.
百度