Alternative method of submitting jobs to DF Runner #756
remylouisew wants to merge 2 commits into apple:main from
Conversation
… to use 'axlearn gcp vm start'
markblee
left a comment
Thanks @remylouisew -- IIUC, the main issue is with flag escaping. Aside from finding a more generic fix for flag parsing, maybe we can add support for loading from flagfiles, which would avoid the duplicate code and manual renaming step. (It seems that flag escaping is painful for users anyway.) WDYT?
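For readers unfamiliar with flagfiles: absl-style flags can be read from a file passed via `--flagfile`, one flag per line, which sidesteps shell escaping entirely. A minimal sketch of that expansion, assuming no absl dependency (the `expand_flagfiles` helper and the flag values here are hypothetical, not part of axlearn):

```python
# Hypothetical sketch: expand --flagfile=PATH entries in an argument list
# into the flags stored in that file, one flag per line, mimicking absl's
# flagfile behavior. Because each flag lives on its own line, values
# containing quotes or semicolons need no shell escaping.
import tempfile

def expand_flagfiles(argv):
    expanded = []
    for arg in argv:
        if arg.startswith("--flagfile="):
            path = arg.split("=", 1)[1]
            with open(path) as f:
                # Blank lines and comments are skipped; everything else is
                # passed through verbatim, with no shell quoting applied.
                expanded.extend(
                    line.strip() for line in f
                    if line.strip() and not line.strip().startswith("#")
                )
        else:
            expanded.append(arg)
    return expanded

# Demo: a flag value with colons and semicolons survives untouched.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("--dataflow_service_options="
            "worker_accelerator=type:nvidia-tesla-t4;count:1\n")
    flagfile = f.name

result = expand_flagfiles(["--flagfile=" + flagfile, "--other=1"])
```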
@markblee The flag escaping has indeed been painful! I am not familiar with the process of loading from flagfiles; could you elaborate on how it would be implemented?
Closing this PR due to inactivity. Please re-open or file a new PR if this is still important.
Running a Dataflow job via the `axlearn gcp vm start` command is neither necessary nor intuitive for non-Apple users. Additionally, if you are already running your commands from a VM (e.g. from a remote desktop), this process does not work. I recognize that there are still scenarios where you would want to launch your Dataflow jobs from a VM, so rather than replacing that ability, I am adding an alternative.
In order to submit jobs to the Dataflow runner without using `axlearn gcp vm start`, changes to the quoting behavior of dataflow.py are necessary. Unfortunately, there is no obviously elegant way to provide two versions of dataflow.py, so if you can think of a better option, please let me know.
What I've done is this: dataflow.py remains as it was, and I'm adding dataflow.alt.py. In the directions, I've added instructions to replace the original module if the user wants to submit jobs to the Dataflow runner without needing to create a VM.
Additional note: PR #711 changes the quoting behavior to allow submitting Dataflow jobs without `axlearn gcp vm start`; however, that fix does not work for commands with parameters that require quotes, e.g. --dataflow_service_options='worker_accelerator=type:nvidia-tesla-t4;count:1;install-nvidia-driver'
The dataflow.alt.py module DOES work with these kinds of parameters.
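For context on why that parameter is tricky: the semicolons in the value are shell metacharacters, so the token must be quoted once and only once on its way to the worker. Python's stdlib `shlex` illustrates the round trip such a value must survive; a hedged sketch (the flag value is taken from the example above, the rest is illustrative, not axlearn code):

```python
import shlex

# Illustrative sketch of the escaping problem: a flag value containing
# colons and semicolons, as in the --dataflow_service_options example.
value = "worker_accelerator=type:nvidia-tesla-t4;count:1;install-nvidia-driver"
flag = "--dataflow_service_options=" + value

# shlex.quote wraps the whole token in single quotes so a shell will not
# split it at the semicolons; shlex.split recovers the token intact.
quoted = shlex.quote(flag)
roundtripped = shlex.split(quoted)
```

Any quoting scheme that re-quotes or strips this token a second time (the failure mode described above) breaks the round trip.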