{{Short description|Parallel programming model}}
'''MapReduce''' is a [[programming model]] and an associated implementation for processing and generating [[big data]] sets with a [[Parallel computing|parallel]], [[distributed computing|distributed]] algorithm on a [[Cluster (computing)|cluster]].<ref>{{cite web|url=https://hadoop.apache.org/docs/r1.2.1/mapred_tutorial.html|title=MapReduce Tutorial|access-date=3 July 2019|website=Apache Hadoop}}</ref><ref>{{cite web|url=http://news.cnet.com/8301-10784_3-9955184-7.html|title=Google spotlights data center inner workings|date=30 May 2008|website=cnet.com}}</ref><ref name="GoogleMapReduce">{{cite web|url=http://static.googleusercontent.com/media/research.google.com/es/us/archive/mapreduce-osdi04.pdf|title=MapReduce: Simplified Data Processing on Large Clusters|website=googleusercontent.com}}</ref>
 
A MapReduce program is composed of a [[map (parallel pattern)|''map'']] [[procedure (computing)|procedure]], which performs filtering and sorting (such as sorting students by first name into queues, one queue for each name), and a ''[[Reduce (parallel pattern)|reduce]]'' method, which performs a summary operation (such as counting the number of students in each queue, yielding name frequencies). The "MapReduce System" (also called "infrastructure" or "framework") orchestrates the processing by [[Marshalling (computer science)|marshalling]] the distributed servers, running the various tasks in parallel, managing all communications and data transfers between the various parts of the system, and providing for [[Redundancy (engineering)|redundancy]] and [[Fault-tolerant computer system|fault tolerance]].
===Examples===
The canonical MapReduce example counts the appearance of each word in a set of documents:<ref>{{cite web|url=http://research.google.com/archive/mapreduce-osdi04-slides/index-auto-0004.html|title=Example: Count word occurrences|publisher=Google Research|access-date=September 18, 2013}}</ref>
 
 
'''function''' <u>map</u>(String name, String document):
'''end function'''
 
Note that in the {{mono|Reduce}} function, {{mono|C}} is the count of people who have {{mono|N}} contacts in total, so in the {{mono|Map}} function it is natural to write {{mono|C=1}}, since every output pair refers to the contacts of one single person.
 
The MapReduce system would line up the 1100 Map processors, and would provide each with its corresponding 1 million input records. The Map step would produce 1.1 billion {{mono|(Y,(N,1))}} records, with {{mono|Y}} values ranging between, say, 8 and 103. The MapReduce system would then line up the 96 Reduce processors by shuffling the key/value pairs (because the average must be computed per age), and provide each with its millions of corresponding input records. The Reduce step would result in the much reduced set of only 96 output records {{mono|(Y,A)}}, which would be put in the final result file, sorted by {{mono|Y}}.
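
The following is a minimal, self-contained sketch of this age-average computation in Scala, run against ordinary in-memory collections rather than any real MapReduce framework; the {{mono|Person}} type, its field names, and the {{mono|run}} helper are assumptions made purely for illustration.

 // Illustrative sketch only: an in-memory analogue of the age-average example.
 // The Person type and its field names are assumed for this example.
 case class Person(age: Int, contacts: Int)
  
 // Map: for a person of age Y with N contacts, emit (Y, (N, 1));
 // the trailing 1 plays the role of C, the number of people represented so far.
 def map(p: Person): (Int, (Int, Int)) = (p.age, (p.contacts, 1))
  
 // Reduce: for one age Y, combine its (N, C) pairs into (average contacts, total people).
 def reduce(age: Int, values: Seq[(Int, Int)]): (Int, (Double, Int)) = {
   val s    = values.map { case (n, c) => n.toLong * c }.sum   // S = sum of N*C
   val cNew = values.map(_._2).sum                              // Cnew = sum of C
   (age, (s.toDouble / cNew, cNew))
 }
  
 // "Shuffle": group the mapped pairs by key (the age), then reduce each group.
 def run(people: Seq[Person]): Map[Int, (Double, Int)] =
   people.map(p => map(p)).groupBy(_._1).map { case (age, kvs) =>
     val (k, result) = reduce(age, kvs.map(_._2))
     k -> result
   }
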
==Theoretical background==
 
Properties of [[monoid]]s are the basis for ensuring the validity of MapReduce operations.<ref>{{Cite journal
| doi = 10.1017/S0956796817000193
| title = An algebra for distributed Big Data analytics
| doi-access = free
| url = https://www.semanticscholar.org/paper/An-algebra-for-distributed-Big-Data-analytics-Fegaras/04983a1c0d6343e21129aaffca2a1b3eec419523?p2df
}}</ref><ref>{{cite arXiv
|last=Lin
|first=Jimmy
|title=Monoidify! Monoids as a Design Principle for Efficient MapReduce Algorithms
|eprint=1304.7544
|date=29 Apr 2013}}</ref>
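
As a rough illustration of these properties (a sketch in plain Scala, not tied to any particular library), a monoid supplies an identity element and an associative combining operation; associativity is what allows the reduce step to be split across partitions and merged in any grouping, and the identity element covers empty partitions.

 // Sketch only: the monoid structure that makes a distributed reduce well defined.
 trait Monoid[B] {
   def empty: B                 // identity: combine(empty, b) == b == combine(b, empty)
   def combine(x: B, y: B): B   // must be associative
 }
  
 // Example instance: integer addition, as used when summing word counts.
 val intSum: Monoid[Int] = new Monoid[Int] {
   def empty = 0
   def combine(x: Int, y: Int) = x + y
 }
  
 // Each partition is reduced independently; because combine is associative and
 // empty is neutral, merging the partial results in any grouping gives the same
 // answer as a single sequential fold over all the data.
 def reducePartitions[B](partitions: Seq[Seq[B]], m: Monoid[B]): B =
   partitions.map(p => p.foldLeft(m.empty)(m.combine))
             .foldLeft(m.empty)(m.combine)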
 
In the Algebird package,<ref>{{Cite web
|title= Abstract Algebra for Scala
|url=https://twitter.github.io/algebird/}}</ref> a Scala implementation of Map/Reduce explicitly requires a [[monoid]] type class.<ref>{{Cite web
|title= Encoding Map-Reduce As A Monoid With Left Folding
|url= http://erikerlandson.github.io/blog/2016/09/05/expressing-map-reduce-as-a-left-folding-monoid/}}</ref>
 
 
The operations of MapReduce deal with two types: the type ''A'' of input data being mapped, and the type ''B'' of output data being reduced.
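
A generic sketch of this typing (hypothetical names written in plain Scala, not Algebird's actual API) makes the monoid requirement explicit: the map function turns each value of type ''A'' into a value of type ''B'', and the reduce step folds the ''B'' values together using a monoid on ''B''.

 // Sketch only: a generic map/reduce over input type A and result type B,
 // where B must carry a monoid (identity element plus associative combine).
 trait Monoid[B] { def empty: B; def combine(x: B, y: B): B }
  
 def mapReduce[A, B](input: Seq[A])(f: A => B)(implicit m: Monoid[B]): B =
   input.map(f).foldLeft(m.empty)(m.combine)
  
 // Usage: word counting, with B = Map[String, Int] under element-wise addition.
 implicit val wordCountMonoid: Monoid[Map[String, Int]] =
   new Monoid[Map[String, Int]] {
     def empty = Map.empty[String, Int]
     def combine(x: Map[String, Int], y: Map[String, Int]) =
       y.foldLeft(x) { case (acc, (w, n)) => acc.updated(w, acc.getOrElse(w, 0) + n) }
   }
  
 val counts = mapReduce(Seq("to be", "or not to be")) { doc =>
   doc.split(" ").groupBy(identity).map { case (w, ws) => w -> ws.length }
 }
 // counts == Map("to" -> 2, "be" -> 2, "or" -> 1, "not" -> 1)
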
DeWitt and Stonebraker subsequently published a detailed benchmark study in 2009 comparing the performance of [[Hadoop|Hadoop's]] MapReduce and [[RDBMS]] approaches on several specific problems.<ref name="sigmod">{{cite web| url=http://database.cs.brown.edu/projects/mapreduce-vs-dbms| title=A Comparison of Approaches to Large-Scale Data Analysis|first1=Andrew |last1=Pavlo |first2=Erik |last2=Paulson |first3=Alexander |last3=Rasin |first4=Daniel J. |last4=Abadi |first5=David J. |last5=DeWitt |first6=Samuel |last6=Madden |first7=Michael |last7=Stonebraker| publisher=Brown University| access-date=2010-01-11}}</ref> They concluded that relational databases offer real advantages for many kinds of data use, especially on complex processing or where the data is used across an enterprise, but that MapReduce may be easier for users to adopt for simple or one-time processing tasks.
 
The MapReduce programming paradigm was also described in [[Danny Hillis]]'s 1985 thesis <ref name="WDHmit86">{{cite book |author-first=W. Danny |author-last=Hillis |date=1986 |title=The Connection Machine |publisher=[[MIT Press]] |isbn=0262081571 |url-access=registration |url=https://archive.org/details/connectionmachin00hill }}</ref> intended for use on the [[Connection Machine]], where it was called "xapping/reduction"<ref>{{cite web |url=http://bitsavers.trailing-edge.com/pdf/thinkingMachines/CM2/HA87-4_Connection_Machine_Model_CM-2_Technical_Summary_Apr1987.pdf |title=Connection Machine Model CM-2 Technical Summary |author=<!--Not stated--> |date=1987-04-01 |publisher=[[Thinking Machines Corporation]] |access-date=2022-11-21}}</ref> and relied upon that machine's special hardware to accelerate both map and reduce. The dialect ultimately used for the Connection Machine, the 1986 [[StarLisp]], had parallel <code>*map</code> and <code>reduce!!</code>,<ref>{{cite web |url=https://www.softwarepreservation.org/projects/LISP/starlisp/supplement-to-the-starlisp-reference-manual-version-5-0.pdf |title=Supplement to the *Lisp Reference Manual |author=<!--Not stated--> |date=1988-09-01 |publisher=[[Thinking Machines Corporation]] |access-date=2022-11-21}}</ref> which in turn was based on the 1984 [[Common Lisp]], which had non-parallel <code>map</code> and <code>reduce</code> built in.<ref>{{cite web |url=https://collections.lib.utah.edu/dl_files/20/2e/202ebf04b52d043c78297444bc9bc4fbc17b6b5e.pdf |title=Rediflow Architecture Prospectus |author=<!--Not stated--> |date=1986-04-05 |publisher=[[University of Utah School of Computing|University of Utah Department of Computer Science]] |access-date=2022-11-21}}</ref> The [[Fold (higher-order function)#Linear vs. tree-like folds|tree-like]] approach that the Connection Machine's [[Hypercube internetwork topology|hypercube architecture]] uses to execute <code>reduce</code> in <math>O(\log n)</math> time<ref>{{cite book |url=https://www.cise.ufl.edu/~sahni/papers/imagemono.pdf#page=20 |title=Hypercube Algorithms for Image Processing and Pattern Recognition |last=Ranka |first=Sanjay |date=1989 |access-date=2022-12-08 |section=2.6 Data Sum |publisher=University of Florida}}</ref> is effectively the same as the approach referred to within the Google paper as prior work.{{r|GoogleMapReduce|p=11|q=an associative function can be computed over all prefixes of an N element array in log N time on N processors using parallel prefix computations. MapReduce can be considered a simplification and distillation of some of these models}}
 
In 2010 Google was granted what is described as a patent on MapReduce. The patent, filed in 2004, may cover use of MapReduce by open source software such as [[Hadoop]], [[CouchDB]], and others. In ''[[Ars Technica]]'', an editor acknowledged Google's role in popularizing the MapReduce concept, but questioned whether the patent was valid or novel.<ref>{{cite news |last1=Paul |first1=Ryan |title=Google's MapReduce patent: what does it mean for Hadoop? |url=https://arstechnica.com/information-technology/2010/01/googles-mapreduce-patent-what-does-it-mean-for-hadoop/ |access-date=21 March 2021 |work=Ars Technica |date=20 January 2010 |language=en-us}}</ref><ref name="patent">{{cite web|url=http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=/netahtml/PTO/srchnum.htm&r=1&f=G&l=50&s1=7,650,331.PN.&OS=PN/7,650,331&RS=PN/7,650,331|title=United States Patent: 7650331 - System and method for efficient large-scale data processing|website=uspto.gov}}</ref> In 2013, as part of its "Open Patent Non-Assertion (OPN) Pledge", Google pledged to only use the patent defensively.<ref>{{cite news |last1=Nazer |first1=Daniel |title=Google Makes Open Patent Non-assertion Pledge and Proposes New Licensing Models |url=https://www.eff.org/deeplinks/2013/03/google-makes-open-patent-non-assertion-pledge |access-date=21 March 2021 |work=Electronic Frontier Foundation |date=28 March 2013 |language=en}}</ref><ref>{{cite news |last1=King |first1=Rachel |title=Google expands open patent pledge to 79 more about data center management |url=https://www.zdnet.com/article/google-expands-open-patent-pledge-to-79-more-about-data-center-management/ |access-date=21 March 2021 |work=ZDNet |date=2013 |language=en}}</ref> The patent is expected to expire on 23 December 2026.<ref>{{cite web |title=System and method for efficient large-scale data processing |url=https://patents.google.com/patent/US7650331B1/en |publisher=Google Patents Search |access-date=21 March 2021 |language=en |date=18 June 2004}}</ref>
 
===Restricted programming framework===
MapReduce tasks must be written as acyclic dataflow programs, i.e. a stateless mapper followed by a stateless reducer, that are executed by a batch job scheduler. This paradigm makes repeated querying of datasets difficult and imposes limitations that are felt in fields such as [[Graph (abstract data type)|graph]] processing,<ref>{{cite conference |url=https://csc.csudh.edu/btang/seminar/papers/BigD399.pdf |title=Map-Based Graph Analysis on MapReduce |last1=Gupta |first1=Upa |last2=Fegaras |first2=Leonidas |date=2013-10-06 |publisher=[[IEEE]] |book-title=Proceedings: 2013 IEEE International Conference on Big Data |pages=24–30 |location=[[Santa Clara, California]] |conference=2013 IEEE International Conference on Big Data}}</ref> where iterative algorithms that revisit a single [[working set]] multiple times are the norm, and, when the data sits on high-[[Latency (engineering)#Mechanics|latency]] [[Hard disk drive|disk]] storage, even [[machine learning]], where multiple passes through the data are required even though each pass can access the data serially.
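
As a rough sketch of the issue (plain Scala with a hypothetical single-machine job helper, not any real framework's API), an iterative algorithm such as a simple one-dimensional clustering step has to be expressed as one complete map/shuffle/reduce pass per iteration, re-reading the whole input each time, because mappers and reducers keep no state between jobs:

 // Sketch only: one stateless "batch job" = map, group by key, reduce per key.
 def runJob[A, K, V](input: Seq[A])(map: A => (K, V))(reduce: (V, V) => V): Map[K, V] =
   input.map(map).groupBy(_._1).map { case (k, kvs) => k -> kvs.map(_._2).reduce(reduce) }
  
 // Iterative use (illustrative 1-D clustering): every iteration launches a fresh
 // job over the full dataset, and the only carried state (the centroids) must be
 // threaded through from outside the framework between jobs.
 def iterate(points: Seq[Double], centroids: Seq[Double], steps: Int): Seq[Double] =
   if (steps == 0) centroids
   else {
     val sums = runJob(points)(p => (centroids.minBy(c => math.abs(c - p)), (p, 1))) {
       (a, b) => (a._1 + b._1, a._2 + b._2)
     }
     iterate(points, sums.values.map { case (total, n) => total / n }.toSeq, steps - 1)
   }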
 
==See also==