Jobs

A job is a way of running a notebook or JAR either immediately or on a scheduled basis. The other way to run a notebook is interactively in the notebook UI.

You can create and run jobs using the UI, the CLI, and by invoking the Jobs API. You can monitor job run results in the UI, using the CLI, by querying the API, and through email alerts. This article focuses on performing job tasks using the UI. For the other methods, see Jobs CLI and Jobs API.
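
For example, a run can be triggered from the command line or through the REST API. This is a minimal sketch, assuming the Databricks CLI is installed and configured; <databricks-instance>, the personal access token, and job ID 123 are placeholders:

# Trigger a run of an existing job with the Jobs CLI
databricks jobs run-now --job-id 123

# Equivalent direct call to the Jobs API
curl -X POST https://<databricks-instance>/api/2.0/jobs/run-now \
  -H "Authorization: Bearer <personal-access-token>" \
  -d '{ "job_id": 123 }'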

Important

  • The number of jobs is limited to 1000.
  • The number of jobs a workspace can create in an hour is limited to 5000 (includes “run now” and “runs submit”). This limit also affects jobs created by the REST API and notebook workflows.
  • A workspace is limited to 150 concurrent (running) job runs.
  • A workspace is limited to 1000 active (running and pending) job runs.

View jobs

单击 "作业" 图标Click the Jobs icon 作业菜单图标 在边栏中。in the sidebar. 将显示 "作业" 列表。The Jobs list displays. "作业" 页将列出所有定义的作业、群集定义、计划(如果有)以及上次运行的结果。The Jobs page lists all defined jobs, the cluster definition, the schedule if any, and the result of the last run.

在 "作业" 列表中,可以筛选作业:In the Jobs list, you can filter jobs:

  • Using keywords.
  • Selecting only jobs you own or jobs you have access to. Access to this filter depends on Jobs access control being enabled.

You can also click any column header to sort the list of jobs (either descending or ascending) by that column. By default, the page is sorted by job name in ascending order.

Jobs list

Create a job

  1. 单击 " + 创建作业"。Click + Create Job. 将显示 "作业详细信息" 页。The job detail page displays.

    Job detail

  2. Enter a name in the text field with the placeholder text Untitled.

  3. Specify the task type: click Select Notebook, Set JAR, or Configure spark-submit.

    • Notebook

      1. Select a notebook and click OK.
      2. Next to Parameters, click Edit. Specify key-value pairs or a JSON string representing key-value pairs. Such parameters set the value of widgets.
    • JAR: Upload a JAR, specify the main class and arguments, and click OK. To learn more about JAR jobs, see JAR job tips.

    • spark-submit: Specify the main class, path to the library JAR, and arguments, and click Confirm. To learn more about spark-submit, see the Apache Spark documentation.

      Note

      The following Azure Databricks features are not available for spark-submit jobs:

  4. 在 "从属库" 字段中,可以选择单击 " 添加 " 并指定相关库。In the Dependent Libraries field, optionally click Add and specify dependent libraries. 依赖库将在启动时自动附加到群集。Dependent libraries are automatically attached to the cluster on launch. 请按照 库依赖项 中的建议指定依赖项。Follow the recommendations in Library dependencies for specifying dependencies.

    Important

    If you have configured a library to automatically install on all clusters, or in the next step you select an existing terminated cluster that has libraries installed, the job execution does not wait for library installation to complete. If a job requires a certain library, you should attach the library to the job in the Dependent Libraries field.

  5. In the Cluster field, click Edit and specify the cluster on which to run the job. In the Cluster Type drop-down, choose New Job Cluster or Existing All-Purpose Cluster.

    Note

    Keep the following in mind when you choose a cluster type:

    • For production-level jobs or jobs that are important to complete, we recommend that you select New Job Cluster.
    • You can run spark-submit jobs only on new clusters.
    • When you run a job on a new cluster, the job is treated as a data engineering (job) workload subject to job workload pricing. When you run a job on an existing cluster, the job is treated as a data analytics (all-purpose) workload subject to all-purpose workload pricing.
    • If you select a terminated existing cluster and the job owner has Can Restart permission, Azure Databricks starts the cluster when the job is scheduled to run.
    • Existing clusters work best for tasks such as updating dashboards at regular intervals.
    • New Job Cluster - complete the cluster configuration.
      1. In the cluster configuration, select a runtime version. For help with selecting a runtime version, see Databricks Runtime and Databricks Light.
      2. To decrease new cluster start time, select a pool in the cluster configuration.
    • Existing All-Purpose Cluster - in the drop-down, select the existing cluster.
  6. In the Schedule field, optionally click Edit and schedule the job. See Run a job.

  7. Optionally click Advanced and specify advanced job options. See Advanced job options. A sketch of an equivalent Jobs API create request follows this procedure.
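
The job you build in the UI corresponds to a Jobs API create request. The following is a hedged, minimal sketch of such a request body (sent to POST /api/2.0/jobs/create); the job name, runtime version, node type, notebook path, library path, parameters, and schedule are placeholder values for illustration only:

{
  "name": "Nightly ETL",
  "new_cluster": {
    "spark_version": "6.4.x-scala2.11",
    "node_type_id": "Standard_DS3_v2",
    "num_workers": 2
  },
  "libraries": [
    { "jar": "dbfs:/FileStore/jars/my-library.jar" }
  ],
  "notebook_task": {
    "notebook_path": "/Users/someone@example.com/etl",
    "base_parameters": { "date": "2020-01-01" }
  },
  "schedule": {
    "quartz_cron_expression": "0 0 2 * * ?",
    "timezone_id": "UTC"
  },
  "max_concurrent_runs": 1
}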

View job details

在 "作业" 页上,单击 "名称" 列中的作业名称。On the Jobs page, click a job name in the Name column. "作业详细信息" 页显示配置参数、活动运行 (正在运行和挂起的) 并已完成的运行。The job details page shows configuration parameters, active runs (running and pending), and completed runs.

Job details

Databricks maintains a history of your job runs for up to 60 days. If you need to preserve job runs, we recommend that you export job run results before they expire. For more information, see Export job run results.

在 "作业运行" 页上,通过单击 "Spark" 列中的 " 日志 " 链接,可以查看作业运行的标准错误、标准输出、log4j 输出。In the job runs page, you can view the standard error, standard output, log4j output for a job run by clicking the Logs link in the Spark column.

Run a job

You can run a job on a schedule or immediately.

Schedule a job

To define a schedule for the job:

  1. 单击 "计划" 旁边的 "编辑"。Click Edit next to Schedule.

    Edit schedule

    将显示 "计划作业" 对话框。The Schedule Job dialog displays.

    Schedule job

  2. Specify the schedule granularity, starting time, and time zone. Optionally select the Show Cron Syntax checkbox to display and edit the schedule in Quartz cron syntax. (Example expressions are shown after this procedure.)

    Note

    • Azure Databricks enforces a minimum interval of 10 seconds between subsequent runs triggered by the schedule of a job, regardless of the seconds configuration in the cron expression.
    • You can choose a time zone that observes daylight saving time or UTC. If you select a zone that observes daylight saving time, an hourly job will be skipped or may appear to not fire for an hour or two when daylight saving time begins or ends. If you want jobs to run at every hour (absolute time), choose UTC.
    • The job scheduler, like the Spark batch interface, is not intended for low-latency jobs. Due to network or cloud issues, job runs may occasionally be delayed up to several minutes. In these situations, scheduled jobs will run immediately upon service availability.
  3. Click Confirm.

    Job scheduled
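
If you enable Show Cron Syntax, the schedule is expressed as a Quartz cron expression with six fields (seconds, minutes, hours, day of month, month, day of week). The expressions below are illustrative examples only, not values taken from the dialog:

0 0 6 * * ?          # every day at 06:00
0 30 9 ? * MON-FRI   # weekdays at 09:30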

Pause and resume a job schedule

To pause a job, click the Pause button next to the job schedule:

Job scheduled

To resume a paused job schedule, click the Resume button:

Resume job

Run a job immediately

To run the job immediately, in the Active runs table, click Run Now.

Run now

Tip

单击 " 立即运行 ",以便在完成作业的配置后运行笔记本或 JAR。Click Run Now to do a test run of your notebook or JAR when you’ve finished configuring your job. 如果笔记本出现故障,你可以对其进行编辑,作业将自动运行新版本的笔记本。If your notebook fails, you can edit it and the job will automatically run the new version of the notebook.

Run a job with different parameters

You can use Run Now with Different Parameters to re-run a job specifying different parameters or different values for existing parameters.

  1. 在 " 活动运行 " 表中,单击 " 立即运行",并提供不同的参数In the Active runs table, click Run Now with Different Parameters. 此对话框因你正在运行的是笔记本作业还是 spark 提交作业而有所不同。The dialog varies depending on whether you are running a notebook job or a spark-submit job.

    • Notebook - A UI that lets you set key-value pairs or a JSON object is displayed. You can use this dialog to set the values of widgets:

      Run notebook with parameters

    • spark-submit - A dialog containing the list of parameters is displayed. For example, you could run the SparkPi estimator described in Create a job with 100 instead of the default 10 partitions:

      Set spark-submit parameters

  2. Specify the parameters. The provided parameters are merged with the default parameters for the triggered run. If you delete keys, the default parameters are used. (An equivalent Jobs API call is sketched after these steps.)

  3. Click Run.
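
Parameter overrides can also be supplied programmatically through the Jobs API run-now endpoint. A minimal sketch, assuming a notebook job with the placeholder ID 123 that reads a widget named date (inside the notebook, the value would typically be retrieved with dbutils.widgets.get("date")):

{
  "job_id": 123,
  "notebook_params": { "date": "2020-01-25" }
}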

Notebook job tips

Total notebook cell output (the combined output of all notebook cells) is subject to a 20 MB size limit. Additionally, individual cell output is subject to an 8 MB size limit. If total cell output exceeds 20 MB in size, or if the output of an individual cell is larger than 8 MB, the run will be canceled and marked as failed. If you need help finding cells that are near or beyond the limit, run the notebook against an all-purpose cluster and use this notebook autosave technique.

JAR job tips

There are some caveats you need to be aware of when you run a JAR job.

Output size limits

Note

Available in Databricks Runtime 6.3 and above.

Job output, such as log output emitted to stdout, is subject to a 20 MB size limit. If the total output has a larger size, the run will be canceled and marked as failed.

To avoid encountering this limit, you can prevent stdout from being returned from the driver to Azure Databricks by setting the spark.databricks.driver.disableScalaOutput Spark configuration to true. By default the flag value is false. The flag controls cell output for Scala JAR jobs and Scala notebooks. If the flag is enabled, Spark does not return job execution results to the client. The flag does not affect the data that is written in the cluster’s log files. Setting this flag is recommended only for job clusters for JAR jobs, because it will disable notebook results.
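
One way to set the flag for a job cluster is through the spark_conf field of the new_cluster object when the job is created with the API (or the equivalent Spark config box in the cluster UI). A minimal sketch; the runtime version and node type are placeholders:

"new_cluster": {
  "spark_version": "6.4.x-scala2.11",
  "node_type_id": "Standard_DS3_v2",
  "num_workers": 2,
  "spark_conf": {
    "spark.databricks.driver.disableScalaOutput": "true"
  }
}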

Use the shared SparkContext

Because Databricks is a managed service, some code changes may be necessary to ensure that your Apache Spark jobs run correctly. JAR job programs must use the shared SparkContext API to get the SparkContext. Because Databricks initializes the SparkContext, programs that invoke new SparkContext() will fail. To get the SparkContext, use only the shared SparkContext created by Databricks:

val goodSparkContext = SparkContext.getOrCreate()
val goodSparkSession = SparkSession.builder().getOrCreate()

In addition, there are several methods you should avoid when using the shared SparkContext.

  • Do not call SparkContext.stop().
  • Do not call System.exit(0) or sc.stop() at the end of your Main program. This can cause undefined behavior.

Use try-finally blocks for job clean up

Consider a JAR that consists of two parts:

  • jobBody(), which contains the main part of the job
  • jobCleanup(), which has to be executed after jobBody(), irrespective of whether that function succeeded or returned an exception

As an example, jobBody() may create tables, and you can use jobCleanup() to drop these tables.

The safe way to ensure that the clean-up method is called is to put a try-finally block in the code:

try {
  jobBody()
} finally {
  jobCleanup()
}

You should not try to clean up using sys.addShutdownHook(jobCleanup) or the following code:

val cleanupThread = new Thread { override def run = jobCleanup() }
Runtime.getRuntime.addShutdownHook(cleanupThread)

Due to the way the lifetime of Spark containers is managed in Azure Databricks, the shutdown hooks are not run reliably.
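
Putting the pieces together, a minimal sketch of a JAR job entry point that uses the shared SparkContext and the try-finally pattern might look like the following; the object name, the temporary view, and the bodies of jobBody() and jobCleanup() are illustrative placeholders:

import org.apache.spark.sql.SparkSession

object ExampleJob {
  def main(args: Array[String]): Unit = {
    // Reuse the SparkSession/SparkContext that Databricks already created
    val spark = SparkSession.builder().getOrCreate()

    def jobBody(): Unit = {
      // Main work of the job: register and query a temporary view
      spark.range(100).createOrReplaceTempView("example_tmp")
      spark.sql("SELECT count(*) FROM example_tmp").show()
    }

    def jobCleanup(): Unit = {
      // Runs whether jobBody() succeeded or threw an exception
      spark.catalog.dropTempView("example_tmp")
    }

    try {
      jobBody()
    } finally {
      jobCleanup()
    }
    // Do not call spark.stop(), sc.stop(), or System.exit(0) here
  }
}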

Configure JAR job parameters

JAR jobs are parameterized with an array of strings.

  • In the UI, you input the parameters in the Arguments text box, which are split into an array by applying POSIX shell parsing rules. For more information, reference the shlex documentation.
  • In the API, you input the parameters as a standard JSON array. For more information, reference SparkJarTask. To access these parameters, inspect the String array passed into your main function. (A short illustration follows this list.)
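
For instance, if the Arguments text box contained --input /data/events "2020-01-25 run" (a made-up value), POSIX splitting would produce three elements, which the main method receives as its args array:

object ArgsExample {
  def main(args: Array[String]): Unit = {
    // args would be Array("--input", "/data/events", "2020-01-25 run")
    args.zipWithIndex.foreach { case (arg, i) => println(s"args($i) = $arg") }
  }
}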

View job run details

A job run details page contains job output and links to logs:

Job run details

你可以从 "作业" 页和 "群集" 页查看作业运行详细信息。You can view job run details from the Jobs page and the Clusters page.

  • 单击 "作业" 图标 "  作业" 菜单图标 Click the Jobs icon Jobs Menu Icon. 在 "已完成的过去60天" 表的 "运行" 列中,单击 "运行编号" 链接。In the Run column of the Completed in past 60 days table, click the run number link.

    Job run from Jobs

  • 单击 "群集" 图标  群集图标 Click the Clusters icon Clusters Icon. 在 " 作业群集 " 表的作业行中,单击 " 作业运行 " 链接。In a job row in the Job Clusters table, click the Job Run link.

    Job run from Clusters

Export job run results

You can export notebook run results and job run logs for all job types.

Export notebook run results

You can persist job runs by exporting their results. For notebook job runs, you can export a rendered notebook that can later be imported into your Databricks workspace.

  1. 在 "作业详细信息" 页上,单击 "运行" 列中的作业运行名称。In the job detail page, click a job run name in the Run column.

    Job run

  2. 单击 " 导出到 HTML"。Click Export to HTML.

    Export run result

Export job run logs

You can also export the logs for your job run. To automate this process, you can set up your job so that it automatically delivers logs to DBFS through the Jobs API. For more information, see the NewCluster and ClusterLogConf fields in the Job Create API call.
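
As a rough sketch, the log delivery destination is declared inside the new_cluster object of the create request; the DBFS path below is a placeholder:

"cluster_log_conf": {
  "dbfs": { "destination": "dbfs:/cluster-logs/my-job" }
}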

Edit a job

To edit a job, click the job name link in the Jobs list.

Delete a job

To delete a job, click the x in the Action column in the Jobs list.

Library dependencies

The Spark driver has certain library dependencies that cannot be overridden. These libraries take priority over any of your own libraries that conflict with them.

To get the full list of the driver library dependencies, run the following command inside a notebook attached to a cluster of the same Spark version (or the cluster with the driver you want to examine):

%sh
ls /databricks/jars

Manage library dependencies

A good rule of thumb when dealing with library dependencies while creating JARs for jobs is to list Spark and Hadoop as provided dependencies. On Maven, add Spark and/or Hadoop as provided dependencies, as shown in the following example.

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>2.3.0</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>1.2.1</version>
  <scope>provided</scope>
</dependency>

In sbt, add Spark and Hadoop as provided dependencies, as shown in the following example.

libraryDependencies += "org.apache.spark" %% "spark-core" % "2.3.0" % "provided"
libraryDependencies += "org.apache.hadoop" %% "hadoop-core" % "1.2.1" % "provided"

Tip

Specify the correct Scala version for your dependencies based on the version you are running.

Advanced job options

Maximum concurrent runs

The maximum number of runs that can be run in parallel. On starting a new run, Azure Databricks skips the run if the job has already reached its maximum number of active runs. Set this value higher than the default of 1 if you want to be able to perform multiple runs of the same job concurrently. This is useful, for example, if you trigger your job on a frequent schedule and want to allow consecutive runs to overlap with each other, or if you want to trigger multiple runs that differ by their input parameters.

Alerts

Email alerts sent in case of job failure, success, or timeout. You can set up alerts for job start, job success, and job failure (including skipped jobs), providing multiple comma-separated email addresses for each alert type. You can also opt out of alerts for skipped job runs.

Configure email alerts

Integrate these email alerts with your favorite notification tools, including:

Timeout

The maximum completion time for a job. If the job does not complete in this time, Databricks sets its status to “Timed Out”.

Retries

Policy that determines when and how many times failed runs are retried.

Retry policy

Note

If you configure both Timeout and Retries, the timeout applies to each retry.
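
When a job is created through the Jobs API rather than the UI, these advanced options map onto fields of the create request. A hedged sketch with placeholder values; the addresses and numbers are examples only:

{
  "max_concurrent_runs": 3,
  "email_notifications": {
    "on_success": ["team@example.com"],
    "on_failure": ["team@example.com", "oncall@example.com"],
    "no_alert_for_skipped_runs": true
  },
  "timeout_seconds": 3600,
  "max_retries": 2,
  "min_retry_interval_millis": 60000,
  "retry_on_timeout": false
}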

Control access to jobs

Job access control enables job owners and administrators to grant fine-grained permissions on their jobs. With job access controls, job owners can choose which other users or groups can view the results of the job. Owners can also choose who can manage runs of their job (that is, invoke Run Now and Cancel).

See Jobs access control for details.