{{Short description|Type of low-level computer architecture}}
{{More citations needed|date=August 2012}}
'''Dataflow architecture''' is a [[dataflow]]-based [[computer architecture]] that directly contrasts the traditional [[von Neumann architecture]] or [[control flow]] architecture. Dataflow architectures have no [[program counter]], in concept: the executability and execution of instructions is solely determined based on the availability of input arguments to the instructions,<ref name=architecture>{{cite journal |last=Veen |first=Arthur H. |date=December 1986 |title=Dataflow Machine Architecture |journal=ACM Computing Surveys |volume=18 |number=4 |pages=365–396 |url=https://www.researchgate.net/publication/220566271 |access-date=5 March 2019 |doi=10.1145/27633.28055 |s2cid=5467025}}</ref> so that the order of instruction execution may be hard to predict.
 
Although no commercially successful general-purpose computer hardware has used a dataflow architecture, it has been successfully implemented in specialized hardware such as [[digital signal processing]], [[network routing]], [[graphics processing]], [[telemetry]], and more recently in data warehousing and [[artificial intelligence]] (as polymorphic dataflow,<ref>{{Cite news |last=Maxfield |first=Max |date=24 December 2020 |title=Say Hello to Deep Vision's Polymorphic Dataflow Architecture |work=Electronic Engineering Journal |publisher=Techfocus media}}</ref> Convolution Engine,<ref>{{cite web |url=https://kinara.ai/<!-- Prior: https://deepvision.io/ --> |title=Kinara (formerly Deep Vision) |author=<!-- Unstated --> |date=2022 |website=Kinara |access-date=2022-12-11}}</ref> structure-driven,<ref>{{cite web |url=https://hailo.ai/ |title=Hailo |author=<!-- Unstated --> |date=<!-- Undated --> |website=Hailo |access-date=2022-12-11}}</ref> dataflow [[Scheduling (computing)|scheduling]]<ref>{{Cite report |last=Lie |first=Sean |date=29 August 2022 |url=https://www.cerebras.net/blog/cerebras-architecture-deep-dive-first-look-inside-the-hw/sw-co-design-for-deep-learning |title=Cerebras Architecture Deep Dive: First Look Inside the HW/SW Co-Design for Deep Learning |website=Cerebras}}</ref>). It is also very relevant in many software architectures today, including [[database]] engine designs and [[parallel computing]] frameworks.{{Citation needed|date=March 2015}}
 
Synchronous dataflow architectures tune to match the workload presented by real-time data path applications such as wire speed packet forwarding. Dataflow architectures that are deterministic in nature enable programmers to manage complex tasks such as processor [[Load balancing (computing)|load balancing]], synchronization and accesses to common resources.<ref name="EN-Genius">{{cite press release |date=June 18, 2008 |title=HX300 Family of NPUs and Programmable Ethernet Switches to the Fiber Access Market |url=http://www.en-genius.net/site/zones/networkZONE/product_reviews/netp_061608 |website=EN-Genius |url-status=dead |archive-url=https://web.archive.org/web/20110722151409/http://www.en-genius.net/site/zones/networkZONE/product_reviews/netp_061608 |archive-date=2011-07-22}}</ref>
 
Meanwhile, there is a clash of terminology, since the term ''[[dataflow]]'' is also used for a subarea of parallel programming: [[dataflow programming]].
 
== History ==
Hardware architectures for dataflow were a major topic in [[computer architecture]] research in the 1970s and early 1980s. [[Jack Dennis]] of [[Massachusetts Institute of Technology|MIT]] pioneered the field of static dataflow architectures while the Manchester Dataflow Machine<ref name="Manchester-Dataflow">[https://web.archive.org/web/20120730230237/http://cnc.cs.manchester.ac.uk/projects/dataflow.html "Manchester Dataflow Research Project", Research Reports: Abstracts, September 1997]</ref> and MIT Tagged Token architecture were major projects in dynamic dataflow.
 
The research, however, never overcame the problems related to:
 
* Efficiently broadcasting data tokens in a massively parallel system.
* Efficiently dispatching instruction tokens in a massively parallel system.
* Building [[Content-addressable memory|content-addressable memories]] (CAM) large enough to hold all of the dependencies of a real program.
Instructions and their data dependencies proved to be too fine-grained to be effectively distributed in a large network. That is, the time for the instructions and tagged results to travel through a large connection network was longer than the time to actually do many computations.
 
Nonetheless, [[out-of-order execution]] (OOE) has become the dominant computing paradigm since the 1990s. It is a form of restricted dataflow. This paradigm introduced the idea of an ''execution window''. The execution window follows the sequential order of the von Neumann architecture; however, within the window, instructions are allowed to be completed in data-dependency order. This is accomplished in CPUs that dynamically tag the data dependencies of the code in the execution window. The logical complexity of dynamically keeping track of the data dependencies restricts [[out-of-order execution|OOE]] [[CPU]]s to a small number of execution units (2–6) and limits the execution window sizes to the range of 32 to 200 instructions, much smaller than envisioned for full dataflow machines.{{cn|date=July 2023}}
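
For illustration, the following is a minimal Python sketch (with invented instruction names, registers and latencies) of how instructions inside an execution window can complete in data-dependency order rather than program order:

<syntaxhighlight lang="python">
# Toy timing model of an execution window: each instruction completes once its
# source values are available, so completion order follows data dependencies,
# not program order. All names and latencies here are hypothetical.
window = [
    # (name, destination, sources, latency in cycles)
    ("i0", "r1", ["r0"], 3),   # slow producer (e.g. a load)
    ("i1", "r2", ["r1"], 1),   # waits on i0's result
    ("i2", "r3", ["r0"], 1),   # independent, can overtake i1
]

ready = {"r0": 0}                      # value name -> cycle it becomes available
completion = {}
for name, dest, srcs, lat in window:   # dependency-driven, not sequential, timing
    start = max(ready[s] for s in srcs)
    completion[name] = start + lat
    ready[dest] = start + lat

print(sorted(completion, key=completion.get))   # ['i2', 'i0', 'i1']
</syntaxhighlight>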
 
== Dataflow architecture topics ==
=== Static and dynamic dataflow machines ===
Designs that use conventional memory addresses as data dependency tags are called static dataflow machines. These machines did not allow multiple instances of the same routines to be executed simultaneously because the simple tags could not differentiate between them.
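
A minimal Python sketch (with hypothetical tag fields) of why a static tag, which is only a memory address, cannot separate two simultaneous activations of the same routine, while a tag that also carries an activation identifier can:

<syntaxhighlight lang="python">
from collections import namedtuple

# Hypothetical tag formats; the field names are illustrative only.
StaticTag  = namedtuple("StaticTag",  ["address"])             # operand slot address only
DynamicTag = namedtuple("DynamicTag", ["address", "context"])  # plus an activation id

# Two concurrent activations of the same routine targeting the same operand slot:
print(StaticTag(0x40) == StaticTag(0x40))            # True  -> the tokens collide
print(DynamicTag(0x40, 1) == DynamicTag(0x40, 2))    # False -> activations stay distinct
</syntaxhighlight>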
 
=== Compiler ===
Normally, in the control flow architecture, [[compiler]]s analyze program [[source code]] for data dependencies between instructions in order to better organize the instruction sequences in the binary output files. The instructions are organized sequentially but the dependency information itself is not recorded in the binaries. Binaries compiled for a dataflow machine contain this dependency information.
 
A dataflow compiler records these dependencies by creating unique tags for each dependency instead of using variable names. By giving each dependency a unique tag, it allows the non-dependent code segments in the binary to be executed ''out of order'' and in parallel. The compiler also detects loops, break statements and other program-control syntax and translates them into dataflow form.
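
As a sketch of this idea (in Python, with invented operation names), a dataflow compiler can replace every variable with a freshly generated tag and emit, for each instruction, the tags of the values it consumes; instructions whose input tags do not overlap, such as the add and multiply below, can then be issued in parallel:

<syntaxhighlight lang="python">
# Toy single-assignment input program: (variable, operation, input variables).
source = [
    ("a", "load", []),
    ("b", "load", []),
    ("c", "add",  ["a", "b"]),   # c = a + b
    ("d", "mul",  ["a", "a"]),   # d = a * a  (independent of c)
]

tags = {}        # variable name -> unique tag
binary = []      # emitted form: (result tag, operation, input tags)
for var, op, inputs in source:
    tag = f"t{len(tags)}"
    tags[var] = tag
    binary.append((tag, op, [tags[i] for i in inputs]))

for instr in binary:
    print(instr)
# ('t0', 'load', [])
# ('t1', 'load', [])
# ('t2', 'add', ['t0', 't1'])
# ('t3', 'mul', ['t0', 't0'])
</syntaxhighlight>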
 
=== Programs ===
Programs are loaded into the CAM of a dynamic dataflow computer. When all of the tagged operands of an instruction become available (that is, output from previous instructions and/or user input), the instruction is marked as ready for execution by an [[execution unit]].
 
This is known as ''activating'' or ''firing'' the instruction. Once an instruction is completed by an execution unit, its output data is sent (with its tag) into the CAM. Any instructions that are dependent upon this particular datum (identified by its tag value) are then marked as ready for execution. In this way, subsequent instructions are executed in proper order, avoiding [[race condition]]s. This order may differ from the sequential order envisioned by the human programmer (the programmed order).
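
The firing rule can be sketched in a few lines of Python (with hypothetical tags, operations and values): tokens are held in a matching store keyed by tag, and an instruction fires as soon as every tag it needs is present, sending its result back into the store under its own tag:

<syntaxhighlight lang="python">
# Toy token-matching loop; tags, operations and operand values are invented examples.
program = {
    # result tag: (operation, input tags)
    "t2": (lambda x, y: x + y, ["t0", "t1"]),
    "t3": (lambda x, y: x * y, ["t0", "t2"]),
}

cam = {"t0": 3, "t1": 4}     # matching store seeded with input tokens
pending = dict(program)

while pending:
    for tag, (op, ins) in list(pending.items()):
        if all(i in cam for i in ins):              # all tagged operands available
            cam[tag] = op(*(cam[i] for i in ins))   # fire and send the result token
            del pending[tag]

print(cam["t3"])   # 21, i.e. (3 + 4) * 3
</syntaxhighlight>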
 
=== Instructions ===
 
== See also ==
* [[Parallel computing]]
* [[SISAL]]
* [[Binary Modular Dataflow Machine]] (BMDFM)
* [[Systolic array]]
* [[Transport triggered architecture]]
* [[Network on a chip]] (NoC)
** [[System on a chip]] (SoC)
* [[In-memory computing]]
 
== References ==
{{Reflist}}
 
{{Processor technologies}}
 
{{Hardware acceleration}}
[[Category:Hardware acceleration]]
[[Category:Classes of computers]]
[[Category:Computer architecture]]