Data farming


Data farming is the process of using designed computational experiments to “grow” data, which can then be analyzed using statistical and visualization techniques to obtain insight into complex systems. These methods can be applied to any computational model.
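As a concrete (if toy) illustration of growing data, the Python sketch below runs a hypothetical stochastic model over a small designed experiment with replications, then summarizes the responses. The model, factor names, levels, and replication count are invented for illustration and do not come from the data farming literature.

```python
# A minimal data-farming sketch (hypothetical model and design choices).
import itertools
import random
import statistics

def simulate(arrival_rate, service_rate, seed):
    """Toy stochastic model: a noisy response to two input factors."""
    rng = random.Random(seed)
    return arrival_rate / service_rate + rng.gauss(0, 0.02)

# "Grow" data: run a small full-factorial designed experiment,
# replicating each design point with independent random seeds.
design = list(itertools.product([0.5, 0.7, 0.9],    # arrival_rate levels
                                [1.0, 1.2, 1.4]))   # service_rate levels
grown_data = []
for arrival, service in design:
    reps = [simulate(arrival, service, seed) for seed in range(30)]
    grown_data.append((arrival, service, statistics.mean(reps)))

# Analyze the grown data, e.g., to see how the response varies by factor.
for arrival, service, mean_resp in grown_data:
    print(f"arrival={arrival:.1f}  service={service:.1f}  mean response={mean_resp:.3f}")
```

Real studies replace the toy model with a full simulation and the grid with a more efficient design, but the grow-then-analyze pattern is the same.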

Data farming differs from data mining, as the following metaphors indicate:

Miners seek valuable nuggets of ore buried in the earth, but have no control over what is out there or how hard it is to extract the nuggets from their surroundings. ... Similarly, data miners seek to uncover valuable nuggets of information buried within massive amounts of data. Data-mining techniques use statistical and graphical measures to try to identify interesting correlations or clusters in the data set.

Farmers cultivate the land to maximize their yield. They manipulate the environment to their advantage using irrigation, pest control, crop rotation, fertilizer, and more. Small-scale designed experiments let them determine whether these treatments are effective. Similarly, data farmers manipulate simulation models to their advantage, using large-scale designed experimentation to grow data from their models in a manner that easily lets them extract useful information. ...the results can reveal root cause-and-effect relationships between the model input factors and the model responses, in addition to rich graphical and statistical views of these relationships.[1]

A NATO modeling and simulation task group has documented the data farming process in the Final Report of MSG-088.[2] There, data farming is described as a collaborative process that combines rapid scenario prototyping, simulation modeling, design of experiments, high performance computing, and analysis and visualization in an iterative loop-of-loops.[3]
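Schematically, the loop-of-loops can be pictured as an outer refinement loop wrapped around designed experimentation and analysis. In the Python sketch below, every function is a trivial stub standing in for a real tool; none of the names are taken from the MSG-088 report.

```python
# Schematic sketch of the iterative data farming loop (all stubs are hypothetical).
import random

def prototype_scenario(question):            # rapid scenario prototyping
    return {"question": question, "detail": 1}

def design_experiment(model):                # design of experiments
    return [x / 4 for x in range(5)]

def run_experiment(model, design):           # simulation runs (HPC in practice)
    return [(x, x * model["detail"] + random.random()) for x in design]

def analyze(data):                           # analysis and visualization
    return max(response for _, response in data)

scenario = prototype_scenario("which factor settings matter?")
for iteration in range(3):                   # outer loop: refine the scenario/model
    design = design_experiment(scenario)     # inner loop: designed runs
    data = run_experiment(scenario, design)
    print(f"iteration {iteration}: best observed response {analyze(data):.2f}")
    scenario["detail"] += 1                  # feed findings back into the model
```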

History

The science of Design of Experiments (DOE) has been around for over a century, pioneered by R.A. Fisher for agricultural studies. Many of the classic experiment designs can be used in simulation studies. However, computational experiments have far fewer restrictions than do real-world experiments, in terms of costs, number of factors, time required, ability to replicate, ability to automate, etc. Consequently, a framework specifically oriented toward large-scale simulation experiments is warranted.
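For example, a classic 2^k factorial design assigns each of k factors a low and a high level and runs every combination; in a simulation study the "runs" are just model executions, so replication and automation are cheap. The sketch below estimates main effects from a 2^3 factorial; the factor names and the stand-in response function are hypothetical.

```python
# A 2^3 full factorial design with main-effect estimation
# (factor names and the response function are placeholders).
from itertools import product

factors = ["speed", "sensor_range", "comms_delay"]      # k = 3 factors
design = list(product([-1, +1], repeat=len(factors)))   # 2**3 = 8 runs

def response(point):
    # Stand-in for one simulation run at coded settings in {-1, +1}.
    s, r, c = point
    return 10 + 3 * s + 1.5 * r - 2 * c + 0.5 * s * r

runs = [(point, response(point)) for point in design]

# Main effect of a factor: mean response at +1 minus mean response at -1.
for i, name in enumerate(factors):
    hi = [y for p, y in runs if p[i] == +1]
    lo = [y for p, y in runs if p[i] == -1]
    print(f"main effect of {name}: {sum(hi)/len(hi) - sum(lo)/len(lo):+.2f}")
```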

People have been conducting computational experiments for as long as computers have been around. The term “data farming” is more recent, coined in 1998[4] in conjunction with the Marine Corps' Project Albert,[5] in which small agent-based distillation models (a type of stochastic simulation) were created to capture specific military challenges. These models were run thousands or millions of times at the Maui High Performance Computing Center[6] and other facilities. Project Albert analysts would work with military subject matter experts to refine the models and interpret the results.

Initially, the use of brute-force full factorial (gridded) designs meant that the simulations needed to run very quickly and the studies required high-performance computing. Even so, only a small number of factors (at a limited number of levels) could be investigated, due to the curse of dimensionality.
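The arithmetic behind the curse is straightforward: a gridded design over k factors at m levels apiece needs m^k runs, which outgrows any computing budget as k rises.

```python
# Growth of a full factorial (gridded) design: m levels, k factors -> m**k runs.
for k in (2, 5, 10, 20):
    for m in (3, 5):
        print(f"{k:2d} factors at {m} levels -> {m**k:,} runs")
```

Even at only three levels per factor, twenty factors already require about 3.5 billion runs.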

The SEED Center for Data Farming[7] at the Naval Postgraduate School[8] also worked closely with Project Albert in model generation, output analysis, and the creation of new experimental designs to better leverage the computing capabilities at Maui and other facilities. More recent breakthroughs in designs developed specifically for data farming can be found in the simulation-design literature.[9][10]
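A staple of this work is the space-filling Latin hypercube family of designs (the cited designs are refined, for example nearly orthogonal, variants). The basic random construction is sketched below for illustration only.

```python
# Random Latin hypercube design: each factor takes each of n_runs
# equally spaced levels exactly once (a basic space-filling design).
import random

def latin_hypercube(n_runs, n_factors, seed=1):
    rng = random.Random(seed)
    columns = []
    for _ in range(n_factors):
        levels = list(range(n_runs))
        rng.shuffle(levels)                            # permute the levels per factor
        columns.append([(lvl + rng.random()) / n_runs  # jitter within each cell,
                        for lvl in levels])            # scale to [0, 1)
    return list(zip(*columns))                         # one row per design point

for point in latin_hypercube(n_runs=8, n_factors=3):
    print("  ".join(f"{x:.2f}" for x in point))
```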

Workshops

A series of international data farming workshops has been held since 1998 by the SEED Center for Data Farming.[11] Sixteen more workshops have taken place since the first International Data Farming Workshop, drawing participants from countries including Canada, Singapore, Mexico, Turkey, and the United States.[12]

The International Data Farming Workshops operate through collaboration among teams of experts; the workshop held in 2008 saw over 100 teams participating. Each team of data farmers is assigned a specific area of study, such as robotics, homeland security, or disaster relief, and experiments with different models and tools, such as the Pythagoras agent-based model, the Logistics Battle Command model, and the agent-based sensor effector model (ABSEM).[12]

References

  1. ^ Lucas, T. W.; Kelton, W. D.; Sanchez, P. J.; Sanchez, S. M.; Anderson, B. L. (2015). "Changing the Paradigm: Simulation, Now a Method of First Resort". Naval Research Logistics. 62 (4): 293–305. doi:10.1002/nav.21628. hdl:10945/57859. S2CID 60846350.
  2. ^ NATO Science and Technology Organization. Data Farming in Support of NATO: Final Report of Task Group MSG-088 (STO-TR-MSG-088). https://www.cso.nato.int/Pubs/rdp.asp?RDP=STO-TR-MSG-088
  3. ^ Archived 2015-08-29 at the Wayback Machine
  4. ^ Brandstein, A.; Horne, G. (1998). "Data Farming: A Meta-Technique for Research in the 21st Century". Maneuver Warfare Science. Quantico, VA: Marine Corps Combat Development Command.
  5. ^ Project Albert website. http://projectalbert.org
  6. ^ Maui High Performance Computing Center. https://www.mhpcc.hpc.mil/
  7. ^ SEED Center for Data Farming, Naval Postgraduate School. http://harvest.nps.edu
  8. ^ Naval Postgraduate School. http://www.nps.edu/
  9. ^ Kleijnen, J. P. C.; Sanchez, S. M.; Lucas, T. W.; Cioppa, T. M. (2005). "A User's Guide to the Brave New World of Designing Simulation Experiments". INFORMS Journal on Computing. 17 (3): 263–289. doi:10.1287/ijoc.1050.0136.
  10. ^ Sanchez, S. M.; Sanchez, P.; Wan, H. (2021). "Work Smarter, Not Harder: A Tutorial on Designing and Conducting Simulation Experiments" (PDF). 2021 Winter Simulation Conference (WSC). Piscataway, NJ: Institute of Electrical and Electronics Engineers, Inc. pp. 1–15. doi:10.1109/WSC52266.2021.9715422. hdl:10945/44883. ISBN 9780903440660. S2CID 247059747.
  11. ^ SEED Center for Data Farming. http://harvest.nps.edu
  12. ^ a b Horne, G.; Schwierz, K. (2008). "Data Farming Around the World Overview". Proceedings of the 2008 Winter Simulation Conference. pp. 1442–1447. doi:10.1109/WSC.2008.4736222.