  1. Apache Spark™ - Unified Engine for large-scale data analytics

    Apache Spark is a multi-language engine for executing data engineering, data science, and machine learning on single-node machines or clusters.
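
    A minimal sketch of the single-node case described above, assuming the pyspark package is installed (the app name and sample rows are illustrative):

      from pyspark.sql import SparkSession

      # local[*] runs Spark on this machine using all cores; pointing the
      # master at a cluster manager scales the same code out to a cluster.
      spark = SparkSession.builder.master("local[*]").appName("demo").getOrCreate()

      df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "label"])
      df.show()
      spark.stop()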

  2. Downloads - Apache Spark

    Spark Docker images are available from Docker Hub under the accounts of both The Apache Software Foundation and Official Images. Note that these images contain non-ASF software and may be …

  3. PySpark Overview — PySpark 4.0.1 documentation - Apache Spark

    Spark Connect is a client-server architecture within Apache Spark that enables remote connectivity to Spark clusters from any application. PySpark provides the client for the Spark Connect server, …
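
    A minimal Spark Connect client sketch, assuming a Connect server is already running; sc://localhost:15002 is the server's default endpoint, but the host and port here are assumptions:

      from pyspark.sql import SparkSession

      # Build a client session against a remote Spark Connect server instead
      # of an in-process driver.
      spark = SparkSession.builder.remote("sc://localhost:15002").getOrCreate()

      spark.range(5).show()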

  4. Spark SQL, Built-in Functions

    There is a SQL config 'spark.sql.parser.escapedStringLiterals' that can be used to fall back to the Spark 1.6 behavior regarding string literal parsing. For example, if the config is enabled, the …
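
    A short sketch of opting into that fallback for a session; the two forms below are equivalent ways to set the same SQL config:

      from pyspark.sql import SparkSession

      spark = SparkSession.builder.master("local[*]").getOrCreate()

      # Restore Spark 1.6-style string-literal parsing, in which backslash
      # escapes in SQL literals are not interpreted.
      spark.conf.set("spark.sql.parser.escapedStringLiterals", "true")
      # Equivalent SQL form:
      spark.sql("SET spark.sql.parser.escapedStringLiterals=true")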

  5. Performance Tuning - Spark 4.0.1 Documentation

    Apache Spark’s ability to choose the best execution plan among many possible options is determined in part by its estimates of how many rows will be output by every node in the execution plan (read, filter, …
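
    Those estimates improve when the optimizer has statistics to work from; a sketch using the documented ANALYZE TABLE command and cost-based-optimizer flag, where the table name events and the column user_id are hypothetical:

      from pyspark.sql import SparkSession

      spark = SparkSession.builder.master("local[*]").getOrCreate()

      # Let the cost-based optimizer use collected statistics when comparing plans.
      spark.conf.set("spark.sql.cbo.enabled", "true")

      # Collect table-level and column-level statistics for a (hypothetical)
      # saved table named `events`.
      spark.sql("ANALYZE TABLE events COMPUTE STATISTICS")
      spark.sql("ANALYZE TABLE events COMPUTE STATISTICS FOR COLUMNS user_id")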

  6. Documentation | Apache Spark

    Setup instructions, programming guides, and other documentation are available for each stable version of Spark below.

  7. Running Spark on Kubernetes - Spark 4.0.1 Documentation

    Spark executors must be able to connect to the Spark driver over a hostname and a port that is routable from the Spark executors. The specific network configuration that will be required for Spark to work in …
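
    A client-mode sketch of that requirement; the Kubernetes API server URL, Service hostname, port, and image tag are all assumptions for illustration:

      from pyspark.sql import SparkSession

      # Executor pods connect back to the driver through these values, so the
      # hostname and port must be routable from inside the cluster (e.g. via a
      # headless Kubernetes Service in front of the driver pod).
      spark = (
          SparkSession.builder
          .master("k8s://https://kubernetes.default.svc:443")
          .config("spark.driver.host", "spark-driver-svc.default.svc.cluster.local")
          .config("spark.driver.port", "7078")
          .config("spark.kubernetes.container.image", "apache/spark:4.0.1")
          .getOrCreate()
      )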

  8. pyspark.sql.DataFrame.sample — PySpark 4.0.1 documentation

    DataFrame.sample(withReplacement=None, fraction=None, seed=None): returns a sampled subset of this DataFrame. New in version 1.3.0. Changed …
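
    A usage sketch of that signature; fraction is a per-row probability, not an exact count, so the sampled size varies between runs:

      from pyspark.sql import SparkSession

      spark = SparkSession.builder.master("local[*]").getOrCreate()
      df = spark.range(100)

      # Roughly 10% of rows, sampled without replacement; fixing the seed
      # makes the sample reproducible.
      sampled = df.sample(withReplacement=False, fraction=0.1, seed=42)
      print(sampled.count())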

  9. MLlib | Apache Spark

    Spark runs on Hadoop, Apache Mesos, Kubernetes, standalone, or in the cloud, against diverse data sources. You can run Spark using its standalone cluster mode, on EC2, on Hadoop YARN, on …
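
    In code, the deployment choices listed above come down to the master URL the session is built with; a sketch using the documented URL schemes (hosts and ports are placeholders):

      from pyspark.sql import SparkSession

      # Documented master URL schemes for the listed deployment modes:
      #   local[*]                - single machine, all cores
      #   spark://host:7077       - standalone cluster
      #   yarn                    - Hadoop YARN
      #   k8s://https://host:443  - Kubernetes
      spark = SparkSession.builder.master("local[*]").appName("app").getOrCreate()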

  10. Overview - Spark 4.0.1 Documentation

    If you’d like to build Spark from source, visit Building Spark. Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS), and it should run on any platform that runs a supported version of Java.