You can manage your projects your way with the config.yml file. The top-level fields are listed below, followed by a minimal skeleton.

- project_name: Name of your project (required)
- project_id: Unique ID for your project (required)
- pipelines: List of pipelines in your project
- apps: List of intelligent apps in your project (optional)
- endpoints: List of endpoints in your project (optional)
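To make the layout concrete, here is a minimal sketch of a config.yml using these top-level fields. The field names come from the list above; the example values and the exact shape of the pipelines, apps, and endpoints lists are illustrative assumptions, not output from a real project.

```yaml
# config.yml -- illustrative sketch; all values are placeholders
project_name: my-project          # required
project_id: my-project-001        # required
pipelines:                        # pipelines in the project (fields described below)
  - alias: daily-etl
    path: pipelines/daily_etl.py
apps:                             # optional: intelligent apps in the project
  - path: apps/dashboard/app.yml
endpoints:                        # optional: endpoints in the project
  - path: endpoints/predict.yml
```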
Each entry under pipelines supports the following fields (a sample pipeline entry is sketched after this list):

- alias: Short name for your pipeline
- path: Location of your pipeline file
- compute: Computing instance type. Options are:
  - XSMALL: 2 vCPU, 8 GB RAM
  - SMALL: 4 vCPU, 16 GB RAM
  - MEDIUM: 8 vCPU, 32 GB RAM
  - LARGE: 16 vCPU, 64 GB RAM
  - XLARGE: 32 vCPU, 128 GB RAM (Enterprise only)
- spark_config.deploy_mode: How Spark runs. Default is local. Options are:
  - client: Spark runs in the same process as the driver
  - cluster: Spark runs in a separate process
- spark_config.executor_instances: Number of executors to use in PySpark. If spark_config.deploy_mode is local, this field is ignored.
- python_dependencies: List of Python dependencies for your pipeline (dependency fields are listed further below)
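As a concrete illustration, here is how a single pipeline entry might look in config.yml. The keys mirror the fields above; whether spark_config is written as a nested mapping (as shown here) or with dotted keys, as well as every value, is an assumption made for the example.

```yaml
pipelines:
  - alias: daily-etl                  # short name for the pipeline
    path: pipelines/daily_etl.py      # location of the pipeline file
    compute: MEDIUM                   # 8 vCPU, 32 GB RAM
    spark_config:
      deploy_mode: cluster            # client or cluster; default is local
      executor_instances: 4           # ignored when deploy_mode is local
    python_dependencies:
      - name: pandas
        version: 2.2.2
```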
For each entry under apps:

- path: Location of your app file

For each entry under endpoints:

- path: Location of your endpoint configuration file

Each item in python_dependencies supports the following fields (see the sketch after this list):

- name: Name of the Python package
- version: Version of the Python package (optional)
- index_url: URL of the Python package index (optional)
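A dependency list combining these fields might look like the sketch below. The package names, versions, and index URL are placeholders, and mixing entries with and without index_url is assumed purely for illustration.

```yaml
python_dependencies:
  - name: pandas                                  # package name
    version: 2.2.2                                # optional
  - name: internal-utils
    version: 1.4.0
    index_url: https://pypi.example.com/simple    # optional custom package index
```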