At the core of Dagger.jl is a scheduler heavily inspired by Dask. It can run computations represented as directed-acyclic-graphs (DAGs) efficiently on many Julia worker processes and threads, as well as GPUs via DaggerGPU.jl.
The DTable has been moved out of this repository; it now lives in the DTables.jl package.
Dagger.jl can be installed using the Julia package manager. Enter the Pkg REPL mode by typing `]` in the Julia REPL, then run:

```julia
pkg> add Dagger
```
Or, equivalently, install Dagger via the Pkg API:

```julia
julia> import Pkg; Pkg.add("Dagger")
```
Once installed, the Dagger package can be loaded with `using Dagger`. If you want to use Dagger for distributed computing, load it as:
```julia
using Distributed; addprocs() # Add one Julia worker per CPU core
using Dagger
```
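Once workers are added, Dagger's scheduler is free to place tasks on any of them. As a quick sketch (using `myid()` from the Distributed standard library to report which process a task actually ran on):

```julia
using Distributed
addprocs(2)   # add two Julia worker processes
using Dagger

# Spawn a task; the scheduler picks a process to run it on.
# myid() returns the ID of the process executing the task.
t = Dagger.@spawn myid()

# fetch blocks until the task completes and returns its result;
# with workers available, this is typically a worker ID rather than 1.
println("Task ran on process ", fetch(t))
```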
You can run the following example to see how Dagger exposes easy parallelism:
```julia
# This runs first:
a = Dagger.@spawn rand(100, 100)
# These run in parallel:
b = Dagger.@spawn sum(a)
c = Dagger.@spawn prod(a)
# Finally, this runs:
wait(Dagger.@spawn println("b: ", b, ", c: ", c))
```
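To use a task's result directly in your own code rather than passing it to another `Dagger.@spawn` call, `fetch` it. A minimal sketch:

```julia
using Dagger

# Spawn one task and two tasks that depend on its result;
# the scheduler tracks the dependencies automatically.
a = Dagger.@spawn rand(100, 100)
b = Dagger.@spawn sum(a)   # waits for a, then sums it
c = Dagger.@spawn prod(a)  # independent of b, so it may run in parallel

# fetch blocks until a task finishes and returns its concrete value
println("b: ", fetch(b), ", c: ", fetch(c))
```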
Dagger supports a variety of use cases that benefit from easy, automatic parallelism. This isn't an exhaustive list; there are more examples in the docs, and further use case examples are welcome (just file an issue or PR).
Contributions are encouraged.
There are several ways to contribute to our project:
Reporting Bugs: If you find a bug, please open an issue describing the problem. Make sure to include steps to reproduce it and any error messages you receive.
Fixing Bugs: If you'd like to fix a bug, please create a pull request with your changes. Make sure to include a description of the problem and how your changes address it.
Additional examples and documentation improvements are also very welcome.
For help and discussion, we suggest asking on the Julia Discourse or in the `#distributed` channel on the Julia Slack.
We thank DARPA, Intel, and the NIH for supporting this work at MIT.