One question: you say that "the suggested mapping for a node (a physical or virtual machine) and a worker is 1 Node = 1 Worker process." Does every worker instance then hold an executor for the specific application (which manages its storage and tasks), or does one worker node hold exactly one executor?

This topic describes how to configure spark-submit parameters in E-MapReduce. Spark applications can be written in Scala, Java, or Python. So, let's go and see how all those options map to spark-submit. The "Spark Mode" option lets you specify whether your Spark master is on YARN or whether you are using a standalone Spark cluster. Another option is useful when the system that the Spark job runs from uses internal addresses; by default, if it is not specified, Spark will try to use the local hostname and resolve its IP address. There is also a Web UI option: while your Spark application is running, the Spark driver starts a Web UI that can be used to monitor the running job and inspect its execution. An example is provided in the Spark documentation. The last option is Advanced Properties. What about troubleshooting? Before running spark-submit, you would run the kinit Kerberos command to generate a ticket if you are not using a keytab; if a keytab is used, you can either run kinit with the flags needed to generate a ticket from the keytab, or have your Spark application code log in from the keytab (sketched below).

The purpose of the YARN Application Master instance is to negotiate resources with the Resource Manager and then communicate with the Node Managers to monitor resource utilization and execute containers. Explicit, application-wide allocation of executors can have its downsides. The number of cores per executor also matters: for HDFS I/O, for example, performance is thought to peak at about five cores per executor. We can also tweak Spark's configuration relating to locality when reading data from the cluster, using the spark.locality.wait family of settings. By default, tasks are processed in a FIFO manner (on the job level), but this can be changed by switching to the FAIR scheduling mode (a configuration sketch appears below).

The number two problem that most Spark jobs suffer from is inadequate partitioning of the data; the number of partitions can only be specified statically, on the job level, through the relevant configuration property. It is important to distinguish the high-level APIs from the RDD API, as the two work very differently in Spark. The high-level APIs are much more efficient when it comes to data serialization, because they are aware of the actual data types they are working with, and they can automatically convert join operations into broadcast joins, as sketched below.
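As a hedged sketch of the keytab-based login mentioned above, the snippet below logs in from application code using Hadoop's UserGroupInformation API before the session is created. The principal name, keytab path, and master URL are made-up placeholders, not values from the original text.

```scala
import org.apache.hadoop.security.UserGroupInformation
import org.apache.spark.sql.SparkSession

// Hypothetical principal and keytab path; replace with the ones issued for
// your cluster, or simply run `kinit` before spark-submit instead. This call
// would normally sit at the start of the application's main method.
UserGroupInformation.loginUserFromKeytab(
  "sparkuser@EXAMPLE.COM",
  "/etc/security/keytabs/sparkuser.keytab"
)

// "Spark Mode": the master can be YARN or a standalone Spark master.
val spark = SparkSession.builder()
  .appName("kerberos-login-sketch")
  .master("yarn") // or e.g. "spark://standalone-master:7077"
  .getOrCreate()
```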
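The executor-sizing, locality, and scheduling knobs touched on above are ordinary Spark configuration properties. Below is a minimal sketch of setting them when building a session; aside from the ~5 cores per executor rule of thumb, the values are illustrative assumptions, not recommendations from the text.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch; every value here is a placeholder.
val spark = SparkSession.builder()
  .appName("resource-tuning-sketch")
  // Rule of thumb from the text: ~5 cores per executor for HDFS I/O.
  .config("spark.executor.cores", "5")
  .config("spark.executor.memory", "8g")
  // Dynamic allocation is one way around a fixed, application-wide executor
  // count (it also needs shuffle tracking or the external shuffle service).
  .config("spark.dynamicAllocation.enabled", "true")
  .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
  // How long the scheduler waits for a data-local slot before falling back.
  .config("spark.locality.wait", "3s")
  // Replace the default FIFO job scheduling with FAIR pools.
  .config("spark.scheduler.mode", "FAIR")
  .getOrCreate()
```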
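To make the broadcast-join point concrete, here is a small sketch using the DataFrame API with an explicit broadcast hint. The datasets and column names are invented for illustration; with the high-level API the planner can also choose a broadcast join on its own when the small side fits under the configured threshold.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

val spark = SparkSession.builder().appName("broadcast-join-sketch").getOrCreate()
import spark.implicits._

// Invented data: a larger fact-like dataset and a small lookup table.
val events = Seq((1, "click"), (2, "view"), (1, "view")).toDF("userId", "event")
val users  = Seq((1, "alice"), (2, "bob")).toDF("userId", "name")

// The hint makes the intent explicit; without it, Spark can still plan a
// broadcast join when `users` is below spark.sql.autoBroadcastJoinThreshold.
val joined = events.join(broadcast(users), "userId")
joined.show()
```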
It is important to realize that the RDD API doesn't apply any such optimizations; it is, in fact, literally impossible for it to do so, because each transformation is defined by an opaque function and Spark has no way to see what data we are working with and how. Let's take a look at two definitions of the same computation (see the sketch below): the second is much faster than the first because it handles the data more efficiently for our use case, by not collecting all the elements needlessly. We can observe a similar performance issue when making a cartesian join and only later filtering the result, instead of converting to a pair RDD and using an inner join. The rule of thumb here is to always work with the minimal amount of data at transformation boundaries. There is another rule of thumb that can be derived from this: use rich transformations, i.e. do as much as possible within a single transformation.
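The code the passage refers to is not reproduced here, so the following is a hedged reconstruction of the kind of comparison it describes: an aggregation that needlessly collects everything to the driver versus one that reduces on the executors, followed by a cartesian-plus-filter join versus a pair-RDD inner join. All names and values are invented.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("minimal-data-sketch").getOrCreate()
val sc = spark.sparkContext

// Two definitions of the same computation: the maximum of a set of numbers.
val salaries = sc.parallelize(Seq(3000, 4200, 5100, 2800))

// Definition 1: collects every element to the driver, then reduces locally.
val maxCollected = salaries.collect().max

// Definition 2: reduces on the executors; only one value reaches the driver.
val maxDistributed = salaries.reduce((a, b) => math.max(a, b))

// Cartesian product + filter vs. a pair-RDD inner join on the same data.
val left  = sc.parallelize(Seq((1, "alice"), (2, "bob")))
val right = sc.parallelize(Seq((1, "engineering"), (2, "sales")))

// Materializes every possible pair before discarding most of them.
val viaCartesian = left.cartesian(right)
  .filter { case ((k1, _), (k2, _)) => k1 == k2 }
  .map { case ((k, name), (_, dept)) => (k, (name, dept)) }

// Combines matching keys only; no full cross product is ever built.
val viaJoin = left.join(right)
```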

Spark therefore computes what's called a closure for each task. Variables in closures are pretty simple to keep track of.
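A tiny sketch of closure capture (invented values, not code from the text): the local `threshold` below is referenced inside the filter function, so it becomes part of the task's closure and Spark ships a serialized copy of it to the executors.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("closure-sketch").getOrCreate()
val sc = spark.sparkContext

val numbers = sc.parallelize(1 to 100)

// `threshold` is a plain local variable referenced inside the lambda, so it
// becomes part of the task's closure: Spark serializes it with the task and
// each executor works with its own copy.
val threshold = 42
val big = numbers.filter(_ > threshold)

println(big.count()) // 58 of the 100 values are greater than 42
```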