Support running Pandas UDFs on GPUs in Python processes. #640

Merged Sep 11, 2020 (72 commits)
Changes from 1 commit
Commits
4063857
Support Pandas UDF on GPU
firestarman Jul 21, 2020
94f3b22
Fix an error when running rapids.worker.
firestarman Jul 21, 2020
96e35aa
Pack python files
firestarman Jul 22, 2020
dccf977
Add API to init GPU context in python process
firestarman Jul 27, 2020
ef36d5e
Support limiting the number of python workers
firestarman Jul 31, 2020
93966b4
Support memory limitation for Python processes
firestarman Aug 4, 2020
3b2e527
Improve the memory computation for Python workers
firestarman Aug 5, 2020
1e4895a
Support setting max size of RMM pool
firestarman Aug 6, 2020
c877506
Support more types of Pandas UDF
firestarman Aug 11, 2020
d0d15c2
Use maxsize for max pool size when not specified.
firestarman Aug 18, 2020
61b3589
Support two more types of Pandas UDF
firestarman Aug 27, 2020
65e497d
Add tests for udfs and basic support for accelerated arrow exchange with
revans2 Aug 12, 2020
ec2cec6
Support Pandas UDF on GPU
firestarman Jul 21, 2020
d18ff4e
Fix an error when running rapids.worker.
firestarman Jul 21, 2020
616d1da
Pack python files
firestarman Jul 22, 2020
c5557ca
Add API to init GPU context in python process
firestarman Jul 27, 2020
2fcc2aa
Support limiting the number of python workers
firestarman Jul 31, 2020
31a9fb3
Support memory limitation for Python processes
firestarman Aug 4, 2020
f71f4de
Improve the memory computation for Python workers
firestarman Aug 5, 2020
d4791af
Support setting max size of RMM pool
firestarman Aug 6, 2020
4b88939
Support more types of Pandas UDF
firestarman Aug 11, 2020
d5d156e
Use maxsize for max pool size when not specified.
firestarman Aug 18, 2020
37dc856
Support two more types of Pandas UDF
firestarman Aug 27, 2020
29e39ea
Use the columnar version rule for Scalar Pandas UDF
firestarman Sep 1, 2020
1033f23
Updates the RapidsMeta of plans for Pandas UDF
firestarman Sep 2, 2020
5e28772
Remove the unnecessary env variable
firestarman Sep 2, 2020
3f94ac8
Correct some doc styles to pass mvn verification
firestarman Sep 3, 2020
6f24ca2
add udf test
shotai Sep 3, 2020
5ea3ab6
Merge branch 'pandas-udf' of https://github.com/firestarman/spark-rap…
shotai Sep 3, 2020
69d5b54
Support process pool for Python workers
firestarman Sep 3, 2020
9b25c2e
merge udftest
shotai Sep 3, 2020
c8ab681
add cudf test
shotai Sep 3, 2020
b81fe94
Add a config to disable/enable Pandas UDF on GPU.
firestarman Sep 4, 2020
908bb93
add more test case with cudf
shotai Sep 4, 2020
963c821
refactor udf test
shotai Sep 4, 2020
868ca3a
Python: Not init GPU if no cuda device specified
firestarman Sep 4, 2020
92c10a6
resolve conflict
shotai Sep 4, 2020
e05e4b6
resolve conflict
shotai Sep 4, 2020
69b1ec0
Update the config doc
firestarman Sep 7, 2020
1f56cd9
skip udf in premerge
shotai Sep 7, 2020
e400f5a
add pyarrow in docker
shotai Sep 7, 2020
6435eaa
disable udf test in premerge
shotai Sep 7, 2020
c33b41c
Merge pull request #4 from firestarman/pandas-test-mg
firestarman Sep 8, 2020
53959db
Merge branch 'branch-0.2' into pandas-udf-col
firestarman Sep 8, 2020
732505b
Move gpu init to `try...catch`
firestarman Sep 8, 2020
5a074ac
Remove numpy; it will be included in the pandas installation. Update readme.
shotai Sep 8, 2020
b80a201
update doc with pandas udf support
shotai Sep 8, 2020
7514ac1
update integration dockerfile
shotai Sep 8, 2020
db06504
Merge branch 'pandas-udf' of https://github.com/firestarman/spark-rap…
shotai Sep 8, 2020
7817e5e
Update getting-started-on-prem.md
shotai Sep 8, 2020
0f72025
Update getting-started-on-prem.md
shotai Sep 8, 2020
35a3008
Update getting-started-on-prem.md
shotai Sep 8, 2020
ec7848e
Update getting-started-on-prem.md
shotai Sep 8, 2020
be65f58
Add warning log when python worker reuse enabled
firestarman Sep 8, 2020
1757afa
Replace GpuSemaphore with PythonWorkerSemaphore
firestarman Sep 8, 2020
537e594
Remove the warning log for python worker reuse enabled
firestarman Sep 9, 2020
60cf951
remove udf marker, add comment, update jenkins script for udf_cudf test
shotai Sep 9, 2020
6c9e86b
update doc in pandas udf section
shotai Sep 9, 2020
58fab52
update dockerfile for integration test
shotai Sep 9, 2020
283b6a2
Merge branch 'pandas-udf' of https://github.com/firestarman/spark-rap…
shotai Sep 9, 2020
eee4d05
Update the name of conf for python gpu enabled.
firestarman Sep 10, 2020
8924ad8
add marker for cudf udf test
shotai Sep 10, 2020
7eba830
update comment in test start script
shotai Sep 10, 2020
f838ae0
remove old config
shotai Sep 10, 2020
803fcf4
Not init gpu memory when python on gpu is disabled
firestarman Sep 10, 2020
984082b
remove old config
shotai Sep 10, 2020
127ab08
Merge branch 'pandas-udf' of https://github.com/firestarman/spark-rap…
shotai Sep 10, 2020
6156298
import cudf lib normally
shotai Sep 10, 2020
beabf8b
update import cudf
shotai Sep 10, 2020
47ffc98
Check python module conf only when python gpu enabled
firestarman Sep 10, 2020
b1c9be5
update dynamic config for udf enable
shotai Sep 10, 2020
9860ee6
Merge branch 'pandas-udf' of https://github.com/firestarman/spark-rap…
shotai Sep 10, 2020
Update the config doc
And make the uvm config of Python internal.
firestarman committed Sep 7, 2020
commit 69b1ec004a5b658097c00938fd6855a9b472c1b5
9 changes: 4 additions & 5 deletions docs/configs.md
@@ -38,11 +38,10 @@ Name | Description | Default Value
<a name="memory.pinnedPool.size"></a>spark.rapids.memory.pinnedPool.size|The size of the pinned memory pool in bytes unless otherwise specified. Use 0 to disable the pool.|0
<a name="memory.uvm.enabled"></a>spark.rapids.memory.uvm.enabled|UVM or universal memory can allow main host memory to act essentially as swap for device(GPU) memory. This allows the GPU to process more data than fits in memory, but can result in slower processing. This is an experimental feature.|false
<a name="python.concurrentPythonWorkers"></a>spark.rapids.python.concurrentPythonWorkers|Set the number of Python worker processes that can execute concurrently per GPU. Python worker processes may temporarily block when the number of concurrent Python worker processes started by the same executor exceeds this amount. Allowing too many concurrent tasks on the same GPU may lead to GPU out of memory errors. >0 means enabled, while <=0 means unlimited|0
<a name="python.gpu.enabled"></a>spark.rapids.python.gpu.enabled|Enable (true) or disable (false) the support of running Python Pandas UDFs on the GPU. When enabled, Pandas UDFs can call cuDF APIs for acceleration. This is an experimental feature.|false
<a name="python.memory.gpu.allocFraction"></a>spark.rapids.python.memory.gpu.allocFraction|The fraction of total GPU memory that should be initially allocated for pooled memory for all the Python workers. It supposes to be less than (1 - $(spark.rapids.memory.gpu.allocFraction)), since the executor will share the GPU with its owning Python workers.|None
<a name="python.memory.gpu.maxAllocFraction"></a>spark.rapids.python.memory.gpu.maxAllocFraction|The fraction of total GPU memory that limits the maximum size of the RMM pool for all the Python workers. It supposes to be less than (1 - $(spark.rapids.memory.gpu.maxAllocFraction)), since the executor will share the GPU with its owning Python workers. when setting to 0 means no limit.|0.0
<a name="python.memory.gpu.pooling.enabled"></a>spark.rapids.python.memory.gpu.pooling.enabled|Should RMM in Python workers act as a pooling allocator for GPU memory, or should it just pass through to CUDA memory allocation directly.|None
<a name="python.memory.uvm.enabled"></a>spark.rapids.python.memory.uvm.enabled|Similar with `spark.rapids.python.memory.uvm.enabled`, but this conf is for python workers. This is an experimental feature.|None
<a name="python.gpu.enabled"></a>spark.rapids.python.gpu.enabled|This is an experimental feature and is likely to change in the future. Enable (true) or disable (false) support for scheduling Python Pandas UDFs with GPU resources. When enabled, pandas UDFs are assumed to share the same GPU that the RAPIDs accelerator uses and will honor the python GPU configs|false
<a name="python.memory.gpu.allocFraction"></a>spark.rapids.python.memory.gpu.allocFraction|The fraction of total GPU memory that should be initially allocated for pooled memory for all the Python workers. It supposes to be less than (1 - $(spark.rapids.memory.gpu.allocFraction)), since the executor will share the GPU with its owning Python workers. Half of the rest will be used if not specified|None
<a name="python.memory.gpu.maxAllocFraction"></a>spark.rapids.python.memory.gpu.maxAllocFraction|The fraction of total GPU memory that limits the maximum size of the RMM pool for all the Python workers. It supposes to be less than (1 - $(spark.rapids.memory.gpu.maxAllocFraction)), since the executor will share the GPU with its owning Python workers. when setting to 0 it means no limit.|0.0
<a name="python.memory.gpu.pooling.enabled"></a>spark.rapids.python.memory.gpu.pooling.enabled|Should RMM in Python workers act as a pooling allocator for GPU memory, or should it just pass through to CUDA memory allocation directly. When not specified, It will honor the value of config 'spark.rapids.memory.gpu.pooling.enabled'|None
<a name="shuffle.transport.enabled"></a>spark.rapids.shuffle.transport.enabled|When set to true, enable the Rapids Shuffle Transport for accelerated shuffle.|false
<a name="shuffle.transport.maxReceiveInflightBytes"></a>spark.rapids.shuffle.transport.maxReceiveInflightBytes|Maximum aggregate amount of bytes that be fetched at any given time from peers during shuffle|1073741824
<a name="shuffle.ucx.managementServerHost"></a>spark.rapids.shuffle.ucx.managementServerHost|The host to be used to start the management server|null
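For orientation, the following is a minimal sketch, not taken from this PR, of how the documented properties might be set from PySpark. The `spark.rapids.*` keys come from the table above; the session setup and the specific values are illustrative assumptions.

```python
from pyspark.sql import SparkSession

# Illustrative values only; the property names are the ones documented above.
spark = (
    SparkSession.builder
    .appName("pandas-udf-on-gpu")
    # Enable running Pandas UDFs on the GPU (experimental).
    .config("spark.rapids.python.gpu.enabled", "true")
    # Allow at most two concurrent Python workers per GPU.
    .config("spark.rapids.python.concurrentPythonWorkers", "2")
    # Initial pool fraction for all Python workers; must stay below
    # (1 - spark.rapids.memory.gpu.allocFraction) because the executor
    # shares the GPU with its own Python workers.
    .config("spark.rapids.python.memory.gpu.allocFraction", "0.2")
    # Cap the Python workers' RMM pool; 0 would mean no limit.
    .config("spark.rapids.python.memory.gpu.maxAllocFraction", "0.3")
    .getOrCreate()
)
```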
@@ -16,14 +16,16 @@

package com.nvidia.spark.rapids.python

+import com.nvidia.spark.rapids.RapidsConf.{POOLED_MEM, UVM_ENABLED}
import com.nvidia.spark.rapids.RapidsConf.conf

object PythonConfEntries {

val PYTHON_GPU_ENABLED = conf("spark.rapids.python.gpu.enabled")
.doc("Enable (true) or disable (false) the support of running Python Pandas UDFs" +
" on the GPU. When enabled, Pandas UDFs can call cuDF APIs for acceleration." +
" This is an experimental feature.")
.doc("This is an experimental feature and is likely to change in the future." +
" Enable (true) or disable (false) support for scheduling Python Pandas UDFs with" +
" GPU resources. When enabled, pandas UDFs are assumed to share the same GPU that" +
" the RAPIDs accelerator uses and will honor the python GPU configs")
.booleanConf
.createWithDefault(false)

@@ -40,7 +42,7 @@ object PythonConfEntries {
.doc("The fraction of total GPU memory that should be initially allocated " +
"for pooled memory for all the Python workers. It supposes to be less than " +
"(1 - $(spark.rapids.memory.gpu.allocFraction)), since the executor will share the " +
"GPU with its owning Python workers.")
"GPU with its owning Python workers. Half of the rest will be used if not specified")
.doubleConf
.checkValue(v => v >= 0 && v <= 1, "The fraction value for Python workers must be in [0, 1].")
.createOptional
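To make the "half of the rest" default above concrete, here is a small worked example; the executor fraction of 0.6 is an assumed value for illustration, not a project default.

```python
# Assumed executor setting, for illustration only.
executor_alloc_fraction = 0.6   # spark.rapids.memory.gpu.allocFraction

# When spark.rapids.python.memory.gpu.allocFraction is not specified,
# the Python workers' pool defaults to half of the remaining GPU memory.
remaining_fraction = 1.0 - executor_alloc_fraction   # 0.4
python_default_fraction = remaining_fraction / 2     # 0.2 of total GPU memory
print(python_default_fraction)                       # 0.2
```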
@@ -49,21 +51,24 @@
.doc("The fraction of total GPU memory that limits the maximum size of the RMM pool " +
"for all the Python workers. It supposes to be less than " +
"(1 - $(spark.rapids.memory.gpu.maxAllocFraction)), since the executor will share the " +
"GPU with its owning Python workers. when setting to 0 means no limit.")
"GPU with its owning Python workers. when setting to 0 it means no limit.")
.doubleConf
.checkValue(v => v >= 0 && v <= 1, "The value of maxAllocFraction for Python workers must be" +
" in [0, 1].")
.createWithDefault(0.0)

val PYTHON_POOLED_MEM = conf("spark.rapids.python.memory.gpu.pooling.enabled")
.doc("Should RMM in Python workers act as a pooling allocator for GPU memory, or" +
" should it just pass through to CUDA memory allocation directly.")
" should it just pass through to CUDA memory allocation directly. When not specified," +
s" It will honor the value of config '${POOLED_MEM.key}'")
.booleanConf
.createOptional

val PYTHON_UVM_ENABLED = conf("spark.rapids.python.memory.uvm.enabled")
.doc("Similar with `spark.rapids.python.memory.uvm.enabled`, but this conf is for " +
"python workers. This is an experimental feature.")
.doc(s"Similar with '${UVM_ENABLED.key}', but this conf is for" +
s" python workers. When not specified, it will honor the value of config" +
s" '${UVM_ENABLED.key}'. This is an experimental feature.")
.internal()
.booleanConf
.createOptional

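For completeness, a hedged sketch of the kind of workload these entries target: a Pandas UDF that moves each batch into cuDF inside a GPU-initialized Python worker. The UDF name and its doubling logic are made up for illustration; only `pandas_udf`, `cudf.Series`, and `to_pandas()` are standard APIs.

```python
import pandas as pd
from pyspark.sql.functions import pandas_udf

@pandas_udf("double")
def times_two(v: pd.Series) -> pd.Series:
    # Hypothetical UDF body: inside a GPU-initialized Python worker the
    # batch can be copied to device memory and processed with cuDF.
    import cudf
    gpu_series = cudf.Series(v)    # host pandas Series -> device cuDF Series
    doubled = gpu_series * 2.0     # executes on the GPU
    return doubled.to_pandas()     # back to pandas for Spark

# Usage (assuming a DataFrame `df` with a numeric `value` column):
# df.select(times_two("value")).show()
```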