How Does Spark Running On Yarn Account For Python Memory Usage?
Solution 1:
I'd try increasing spark.python.worker.memory above its default (512m), since your Python code is memory-heavy, and this property's value does not count towards spark.executor.memory.
Amount of memory to use per Python worker process during aggregation, in the same format as JVM memory strings (e.g. 512m, 2g). If the memory used during aggregation goes above this amount, it will spill the data into disks. (From the Spark configuration documentation.)
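For example, a minimal PySpark sketch of raising that property when building the session (the 2g value is only illustrative, tune it to your workload):

from pyspark.sql import SparkSession

# Raise the per-Python-worker aggregation buffer above the 512m default.
# Aggregation data beyond this threshold spills to disk.
spark = (
    SparkSession.builder
    .appName("python-worker-memory-example")
    .config("spark.python.worker.memory", "2g")
    .getOrCreate()
)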
ExecutorMemoryOverhead calculation in Spark:
MEMORY_OVERHEAD_FRACTION = 0.10
MEMORY_OVERHEAD_MINIMUM = 384
val executorMemoryOverhead =
  max(MEMORY_OVERHEAD_FRACTION * ${spark.executor.memory}, MEMORY_OVERHEAD_MINIMUM)
The property is spark.yarn.executor.memoryOverhead for YARN and spark.mesos.executor.memoryOverhead for Mesos.
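As a rough sketch of that formula in Python (the function name and the MB input are mine, not a Spark API):

def executor_memory_overhead_mb(executor_memory_mb):
    # Mirrors the calculation above: 10% of the executor memory,
    # but never less than 384 MB.
    MEMORY_OVERHEAD_FRACTION = 0.10
    MEMORY_OVERHEAD_MINIMUM = 384
    return max(MEMORY_OVERHEAD_FRACTION * executor_memory_mb, MEMORY_OVERHEAD_MINIMUM)

print(executor_memory_overhead_mb(4096))  # 4 GB executor -> 409.6 MB overhead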
YARN kills processes that use more memory than they requested, which is the sum of executorMemoryOverhead and executorMemory.
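If the default overhead is not enough, you can set it explicitly; a sketch of a YARN session with both knobs (the sizes are placeholders, and on older Spark versions spark.yarn.executor.memoryOverhead is given in MB):

from pyspark.sql import SparkSession

# YARN requests roughly spark.executor.memory + spark.yarn.executor.memoryOverhead
# per executor container and kills containers that go beyond that sum.
spark = (
    SparkSession.builder
    .master("yarn")
    .appName("memory-overhead-example")
    .config("spark.executor.memory", "4g")
    .config("spark.yarn.executor.memoryOverhead", "1024")  # MB
    .getOrCreate()
)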
In the image from the question, the Python processes in the worker use spark.python.worker.memory, while spark.yarn.executor.memoryOverhead + spark.executor.memory is specific to the JVM.
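As a worked example using the formula above: with spark.executor.memory=2g the overhead comes out to max(0.10 × 2048, 384) = 384 MB, so YARN expects the JVM container to stay within roughly 2432 MB, while each Python worker is separately allowed up to spark.python.worker.memory before it starts spilling.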
Additional resource: Apache Spark mailing list thread.