Spark is a MapReduce-like cluster computing framework designed for low-latency iterative jobs and interactive use from an interpreter. It provides clean, language-integrated APIs in Scala, Java, and Python, with a rich set of parallel operators. Spark can run on the Apache Mesos cluster manager, on Hadoop YARN, on Amazon EC2, or without an external resource manager (“standalone mode”).
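As a rough illustration of what “language-integrated API” and “parallel operators” mean in practice, here is a minimal Scala sketch of a word count; the application name, input path, and `local[*]` master URL are placeholders (on a real cluster you would point the master at a Mesos, YARN, or standalone deployment instead):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // "local[*]" runs Spark in-process using all cores; swap in a
    // cluster master URL to run on Mesos, YARN, or a standalone cluster.
    val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Parallel operators (flatMap, map, reduceByKey) are chained
    // directly on the distributed dataset; "input.txt" is a placeholder.
    val counts = sc.textFile("input.txt")
      .flatMap(line => line.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.take(10).foreach(println)
    sc.stop()
  }
}
```

Because the operators are ordinary Scala methods taking closures, the same chain can be typed line by line at an interactive shell, which is what makes the framework suited to exploratory use.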
via Delicious http://www.spark-project.org/docs/latest/index.html