 
==Uses==
MapReduce is useful in a wide range of applications, including distributed pattern-based searching, distributed sorting, web link-graph reversal, [[singular value decomposition]],<ref>{{cite web |last1=Bosagh Zadeh|first1=Reza|last2=Carlsson|first2=Gunnar|title=Dimension Independent Matrix Square Using MapReduce |website=Stanford University |url=https://stanford.edu/~rezab/papers/dimsum.pdf|access-date=12 July 2014|bibcode=2013arXiv1304.1467B|year=2013|arxiv=1304.1467}}</ref> web access log statistics, [[inverted index]] construction, [[document clustering]], [[machine learning]],<ref name="mrml">{{cite web| url=http://www.willowgarage.com/map-reduce-machine-learning-multicore| title=Map-Reduce for Machine Learning on Multicore|first1=Andrew Y. |last1=Ng |first2=Gary |last2=Bradski |first3=Cheng-Tao |last3=Chu |first4=Kunle |last4=Olukotun |first5=Sang Kyun |last5=Kim |first6=Yi-An |last6=Lin |first7=YuanYuan |last7=Yu| publisher=NIPS 2006| year=2006}}</ref> and [[statistical machine translation]]. Moreover, the MapReduce model has been adapted to several computing environments such as multi-core and many-core systems,<ref name="evalMR">{{Cite book | doi = 10.1109/HPCA.2007.346181| chapter = Evaluating MapReduce for Multi-core and Multiprocessor Systems| title = 2007 IEEE 13th International Symposium on High Performance Computer Architecture| pages = 13| year = 2007| last1 = Ranger | first1 = C. | last2 = Raghuraman | first2 = R. | last3 = Penmetsa | first3 = A. | last4 = Bradski | first4 = G. | last5 = Kozyrakis | first5 = C. | isbn = 978-1-4244-0804-7| citeseerx = 10.1.1.220.8210| s2cid = 12563671}}</ref><ref name="graphicsMR">{{Cite book | doi = 10.1145/1454115.1454152| chapter = Mars: a MapReduce framework on graphics processors| title = Proceedings of the 17th international conference on Parallel architectures and compilation techniques – PACT '08| pages = 260| year = 2008| last1 = He | first1 = B. | last2 = Fang | first2 = W. | last3 = Luo | first3 = Q. | last4 = Govindaraju | first4 = N. K. | chapter-url = http://wenbin.org/doc/papers/Wenbin08PACT.pdf| last5 = Wang | first5 = T. | isbn = 9781605582825| s2cid = 207169888}}</ref><ref name="tiledMR">{{Cite book | doi = 10.1145/1854273.1854337| chapter = Tiled-MapReduce: optimizing resource usages of data-parallel applications on multicore with tiling| title = Proceedings of the 19th international conference on Parallel architectures and compilation techniques – PACT '10| pages = 523| year = 2010| last1 = Chen | first1 = R. | last2 = Chen | first2 = H. | last3 = Zang | first3 = B. | isbn = 9781450301787| s2cid = 2082196}}</ref> desktop grids,<ref name="gridMR">{{Cite book | doi = 10.1109/3PGCIC.2010.33| chapter = Towards MapReduce for Desktop Grid Computing| title = 2010 International Conference on P2P, Parallel, Grid, Cloud and Internet Computing| pages = 193| year = 2010| last1 = Tang | first1 = B. | last2 = Moca | first2 = M. | last3 = Chevalier | first3 = S. | last4 = He | first4 = H. | last5 = Fedak | first5 = G. | chapter-url = http://graal.ens-lyon.fr/~gfedak/papers/xtremmapreduce.3pgcic10.pdf| isbn = 978-1-4244-8538-3| citeseerx = 10.1.1.671.2763| s2cid = 15044391}}</ref>
multi-cluster,<ref name="HMR">{{Cite book | doi = 10.1145/1996023.1996026| chapter = A Hierarchical Framework for Cross-Domain MapReduce Execution|chapter-url = http://yuanluo.net/publications/LUO_ECMLS2011.pdf| title = Proceedings of the second international workshop on Emerging computational methods for the life sciences (ECMLS '11)| year = 2011| last1 = Luo | first1 = Y. | last2 = Guo | first2 = Z. | last3 = Sun | first3 = Y.| last4 = Plale | first4 = B. | last5 = Qiu | first5 = J. | last6=Li|
first6=W. |isbn = 978-1-4503-0702-4| citeseerx = 10.1.1.364.9898| s2cid = 15179363}}</ref> volunteer computing environments,<ref name="volunteerMR">{{Cite book | doi = 10.1145/1851476.1851489| chapter = MOON: MapReduce On Opportunistic eNvironments| title = Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing – HPDC '10| pages = 95| year = 2010| last1 = Lin | first1 = H. | last2 = Ma | first2 = X. | last3 = Archuleta | first3 = J. | last4 = Feng | first4 = W. C. | last5 = Gardner | first5 = M. | last6 = Zhang | first6 = Z. | chapter-url = http://eprints.cs.vt.edu/archive/00001089/01/moon.pdf| isbn = 9781605589428| s2cid = 2351790}}</ref> dynamic cloud environments,<ref name="dynCloudMR">{{Cite journal| doi = 10.1016/j.jcss.2011.12.021| title = P2P-MapReduce: Parallel data processing in dynamic Cloud environments| journal = [[Journal of Computer and System Sciences]]| volume = 78| issue = 5| pages = 1382–1402| year = 2012| last1 = Marozzo| first1 = F.| last2 = Talia| first2 = D.| last3 = Trunfio| first3 = P.| doi-access = free}}</ref> mobile environments,<ref name="mobileMR">{{Cite book | doi = 10.1145/1839294.1839332| chapter = Misco: a MapReduce framework for mobile systems| title = Proceedings of the 3rd International Conference on PErvasive Technologies Related to Assistive Environments – PETRA '10| pages = 1| year = 2010| last1 = Dou | first1 = A. | last2 = Kalogeraki | first2 = V. | last3 = Gunopulos | first3 = D. | last4 = Mielikainen | first4 = T. | last5 = Tuulos | first5 = V. H. | isbn = 9781450300711| s2cid = 14517696}}</ref> and high-performance computing environments.<ref>{{cite book|chapter=Characterization and Optimization of Memory-Resident MapReduce on HPC Systems|publisher=IEEE|date=May 2014|doi=10.1109/IPDPS.2014.87|isbn=978-1-4799-3800-1|title=2014 IEEE 28th International Parallel and Distributed Processing Symposium|last1=Wang|first1=Yandong|last2=Goldstone|first2=Robin|last3=Yu|first3=Weikuan|last4=Wang|first4=Teng|pages=799–808|s2cid=11157612}}</ref>
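
For example, [[inverted index]] construction maps naturally onto the model: the map function emits a (word, document) pair for every word it encounters, and the reduce function collects, for each word, the list of documents containing it. The following minimal Python sketch illustrates the idea in a single process (the documents and function names here are illustrative; a framework such as [[Apache Hadoop|Hadoop]] would distribute both phases and perform the grouping itself):

<syntaxhighlight lang="python">
from itertools import groupby
from operator import itemgetter

# Map: emit one (word, doc_id) pair per word in the document.
def map_phase(doc_id, text):
    for word in text.split():
        yield (word.lower(), doc_id)

# Reduce: for one word, collect the sorted set of documents containing it.
def reduce_phase(word, doc_ids):
    return (word, sorted(set(doc_ids)))

documents = {1: "map reduce on large clusters",
             2: "reduce functions merge intermediate values"}

# Shuffle: group the intermediate pairs by key, as the framework would.
pairs = sorted(kv for d, t in documents.items() for kv in map_phase(d, t))
index = dict(reduce_phase(w, [d for _, d in g])
             for w, g in groupby(pairs, key=itemgetter(0)))

print(index["reduce"])  # [1, 2]
</syntaxhighlight>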
 
At Google, MapReduce was used to completely regenerate Google's index of the [[World Wide Web]]. It replaced the old ''ad hoc'' programs that updated the index and ran the various analyses.<ref name="usage">{{cite web| quote=As of October, Google was running about 3,000 computing jobs per day through MapReduce, representing thousands of machine-days, according to a presentation by Dean. Among other things, these batch routines analyze the latest Web pages and update Google's indexes.| url=http://www.baselinemag.com/c/a/Infrastructure/How-Google-Works-1/5| title=How Google Works| publisher=baselinemag.com}}</ref> Development at Google has since moved on to technologies such as Percolator, FlumeJava<ref name="Chambers2010">{{cite book |last1=Chambers |first1=Craig |last2=Raniwala |first2=Ashish |last3=Perry |first3=Frances |last4=Adams |first4=Stephen |last5=Henry |first5=Robert R. |last6=Bradshaw |first6=Robert |last7=Weizenbaum |first7=Nathan |title=FlumeJava: Easy, Efficient Data-parallel Pipelines |journal=Proceedings of the 31st ACM SIGPLAN Conference on Programming Language Design and Implementation |date=1 January 2010 |pages=363–375 |doi=10.1145/1806596.1806638 |url=https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35650.pdf |access-date=4 August 2016 |isbn=9781450300193 |s2cid=14888571 |archive-url=https://web.archive.org/web/20160923141630/https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35650.pdf |archive-date=23 September 2016 |url-status=dead }}</ref> and [[Google MillWheel|MillWheel]], which offer streaming operation and incremental updates instead of batch processing, allowing "live" search results to be integrated without rebuilding the complete index.<ref>{{cite conference |last1=Peng |first1=Daniel |last2=Dabek |first2=Frank |title=Large-scale Incremental Processing Using Distributed Transactions and Notifications |conference=OSDI |year=2010 |pages=1–15}}</ref>
 
MapReduce's stable inputs and outputs are usually stored in a [[distributed file system]]. The transient data are usually stored on local disk and fetched remotely by the reducers.
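
The following hypothetical Python sketch simulates this intermediate data flow on one machine (the file layout and function names are illustrative, not the interface of any particular framework): each map task spills its output into one local partition file per reducer, chosen by hashing the key, and each reducer then fetches only its own partition from every map task.

<syntaxhighlight lang="python">
import os
import tempfile
from collections import defaultdict

NUM_REDUCERS = 2
workdir = tempfile.mkdtemp()  # stands in for each worker's local disk

def spill(map_task_id, pairs):
    # Map side: write each intermediate pair to a local partition file,
    # choosing the partition as hash(key) mod R.
    files = [open(os.path.join(workdir, "map%d-part%d" % (map_task_id, r)), "w")
             for r in range(NUM_REDUCERS)]
    for key, value in pairs:
        files[hash(key) % NUM_REDUCERS].write(key + "\t" + value + "\n")
    for f in files:
        f.close()

def fetch(reducer_id, num_map_tasks):
    # Reduce side: pull this reducer's partition from every map task's
    # local disk and group the values by key.
    grouped = defaultdict(list)
    for m in range(num_map_tasks):
        with open(os.path.join(workdir, "map%d-part%d" % (m, reducer_id))) as f:
            for line in f:
                key, value = line.rstrip("\n").split("\t")
                grouped[key].append(value)
    return dict(grouped)

spill(0, [("apple", "1"), ("banana", "1")])
spill(1, [("apple", "1")])
for r in range(NUM_REDUCERS):
    print(r, fetch(r, 2))
</syntaxhighlight>
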
DeWitt and Stonebraker subsequently published a detailed benchmark study in 2009 comparing the performance of [[Hadoop|Hadoop's]] MapReduce and [[RDBMS]] approaches on several specific problems.<ref name="sigmod">{{cite web| url=http://database.cs.brown.edu/projects/mapreduce-vs-dbms| title=A Comparison of Approaches to Large-Scale Data Analysis|first1=Andrew |last1=Pavlo |first2=Erik |last2=Paulson |first3=Alexander |last3=Rasin |first4=Daniel J. |last4=Abadi |first5=David J. |last5=DeWitt |first6=Samuel |last6=Madden |first7=Michael |last7=Stonebraker| publisher=Brown University| access-date=2010-01-11}}</ref> They concluded that relational databases offer real advantages for many kinds of data use, especially on complex processing or where the data is used across an enterprise, but that MapReduce may be easier for users to adopt for simple or one-time processing tasks.
 
The MapReduce programming paradigm was also described in [[Danny Hillis]]'s 1985 thesis<ref name="WDHmit86">{{cite book |author-first=W. Danny |author-last=Hillis |date=1986 |title=The Connection Machine |publisher=[[MIT Press]] |isbn=0262081571 |url-access=registration |url=https://archive.org/details/connectionmachin00hill }}</ref> intended for use on the [[Connection Machine]], where it was called "xapping/reduction"<ref>{{cite web |url=http://bitsavers.trailing-edge.com/pdf/thinkingMachines/CM2/HA87-4_Connection_Machine_Model_CM-2_Technical_Summary_Apr1987.pdf |title=Connection Machine Model CM-2 Technical Summary |author=<!--Not stated--> |date=1987-04-01 |publisher=[[Thinking Machines Corporation]] |access-date=2022-11-21}}</ref> and relied upon that machine's special hardware to accelerate both map and reduce. The dialect ultimately used for the Connection Machine, the 1986 [[StarLisp]], had parallel <code>*map</code> and <code>reduce!!</code>,<ref>{{cite web |url=https://www.softwarepreservation.org/projects/LISP/starlisp/supplement-to-the-starlisp-reference-manual-version-5-0.pdf |title=Supplement to the *Lisp Reference Manual |author=<!--Not stated--> |date=1988-09-01 |publisher=[[Thinking Machines Corporation]] |access-date=2022-11-21}}</ref> which in turn was based on the 1984 [[Common Lisp]], which had non-parallel <code>map</code> and <code>reduce</code> built in.<ref>{{cite web |url=https://collections.lib.utah.edu/dl_files/20/2e/202ebf04b52d043c78297444bc9bc4fbc17b6b5e.pdf |title=Rediflow Architecture Prospectus |author=<!--Not stated--> |date=1986-04-05 |publisher=[[University of Utah School of Computing|University of Utah Department of Computer Science]] |access-date=2022-11-21}}</ref> The [[binary tree]] approach that the Connection Machine's [[Hypercube_internetwork_topology|hypercube architecture]] uses to execute <code>reduce</code> in <math>O(\log n)</math> time<ref>{{cite book |url=https://www.cise.ufl.edu/~sahni/papers/imagemono.pdf#page=20 |title=Hypercube Algorithms for Image Processing and Pattern Recognition |last=Ranka |first=Sanjay |date=1989 |access-date=2022-12-08 |section=2.6 Data Sum |publisher=University of Florida}}</ref> is effectively the same as the approach referred to within the Google paper as prior work.{{r|GoogleMapReduce|p=11|q=an associative function can be computed over all prefixes of an N element array in log N time on N processors using parallel prefix computations. MapReduce can be considered a simplification and distillation of some of these models}}
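
The tree pattern itself is simple to state: combining adjacent pairs halves the number of partial results in each round, so an associative function over <math>n</math> values completes in <math>O(\log n)</math> parallel steps. The following minimal Python sketch models the rounds sequentially (whereas the Connection Machine performed each round's combinations simultaneously in hardware):

<syntaxhighlight lang="python">
# Binary-tree reduction: each round combines adjacent pairs, halving
# the number of partial results, so n values need only ceil(log2(n))
# rounds. On parallel hardware, every combination within a round runs
# at the same time.
def tree_reduce(combine, values):
    level = list(values)
    while len(level) > 1:
        level = [combine(level[i], level[i + 1]) if i + 1 < len(level)
                 else level[i]
                 for i in range(0, len(level), 2)]
    return level[0]

print(tree_reduce(lambda a, b: a + b, range(8)))  # 28, after 3 rounds
</syntaxhighlight>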
 
In 2010 Google was granted what is described as a patent on MapReduce. The patent, filed in 2004, may cover use of MapReduce by open source software such as [[Hadoop]], [[CouchDB]], and others. In ''[[Ars Technica]]'', an editor acknowledged Google's role in popularizing the MapReduce concept, but questioned whether the patent was valid or novel.<ref>{{cite news |last1=Paul |first1=Ryan |title=Google's MapReduce patent: what does it mean for Hadoop? |url=https://arstechnica.com/information-technology/2010/01/googles-mapreduce-patent-what-does-it-mean-for-hadoop/ |access-date=21 March 2021 |work=Ars Technica |date=20 January 2010 |language=en-us}}</ref><ref name="patent">{{cite web|url=http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=/netahtml/PTO/srchnum.htm&r=1&f=G&l=50&s1=7,650,331.PN.&OS=PN/7,650,331&RS=PN/7,650,331|title=United States Patent: 7650331 - System and method for efficient large-scale data processing|website=uspto.gov}}</ref> In 2013, as part of its "Open Patent Non-Assertion (OPN) Pledge", Google pledged to only use the patent defensively.<ref>{{cite news |last1=Nazer |first1=Daniel |title=Google Makes Open Patent Non-assertion Pledge and Proposes New Licensing Models |url=https://www.eff.org/deeplinks/2013/03/google-makes-open-patent-non-assertion-pledge |access-date=21 March 2021 |work=Electronic Frontier Foundation |date=28 March 2013 |language=en}}</ref><ref>{{cite news |last1=King |first1=Rachel |title=Google expands open patent pledge to 79 more about data center management |url=https://www.zdnet.com/article/google-expands-open-patent-pledge-to-79-more-about-data-center-management/ |access-date=21 March 2021 |work=ZDNet |date=2013 |language=en}}</ref> The patent is expected to expire on 23 December 2026.<ref>{{cite web |title=System and method for efficient large-scale data processing |url=https://patents.google.com/patent/US7650331B1/en |publisher=Google Patents Search |access-date=21 March 2021 |language=en |date=18 June 2004}}</ref>