Slot: A slot is often associated with a CPU core. Each physical core of a node in the Spark cluster can execute one Spark task at a time, so the number of available slots on a node is usually equal to the number of CPU cores allocated to Spark on that node.
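A minimal PySpark sketch of how this relationship is usually reasoned about: slots per executor equal spark.executor.cores divided by spark.task.cpus. The 4-core executor value here is only an illustrative assumption, not a recommendation.

```python
from pyspark.sql import SparkSession

# Assumed configuration for illustration: 4 cores per executor, 1 CPU per task.
spark = (
    SparkSession.builder
    .appName("slot-example")
    .config("spark.executor.cores", "4")   # cores allocated to each executor
    .config("spark.task.cpus", "1")        # CPUs required per task (default 1)
    .getOrCreate()
)

executor_cores = int(spark.conf.get("spark.executor.cores"))
task_cpus = int(spark.conf.get("spark.task.cpus"))

# Each executor can run this many tasks concurrently ("slots").
print(f"Task slots per executor: {executor_cores // task_cpus}")
```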
Dynamic Partition Pruning (DPP) in Apache Spark is an optimization technique that improves the performance of join operations involving partitioned tables. It allows Spark to dynamically reduce the amount of data read during query execution by filtering partitions at runtime based on the join conditions. This is especially useful with large datasets, where a query might otherwise scan many unnecessary partitions.
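A minimal PySpark sketch of a query shape where DPP can kick in: a large fact table partitioned on the join key is joined to a small, filtered dimension table. The table paths and column names (sales, dim_date, date_key, year) are illustrative assumptions, not part of any real dataset.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dpp-example").getOrCreate()

# DPP is enabled by default in Spark 3.x; set explicitly here for clarity.
spark.conf.set("spark.sql.optimizer.dynamicPartitionPruning.enabled", "true")

# Large fact table, assumed to be partitioned by date_key on disk.
sales = spark.read.parquet("/data/sales")
# Small dimension table carrying the selective filter.
dim_date = spark.read.parquet("/data/dim_date")

# At runtime, the filter on the dimension side is turned into a pruning
# predicate on the fact side, so only matching sales partitions are scanned.
result = (
    sales.join(dim_date, "date_key")
         .where(dim_date["year"] == 2023)
)

# Inspecting the physical plan should show a dynamic pruning expression
# on the scan of the partitioned table when DPP applies.
result.explain()
```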