Livy intermittently returns batch job status as SUCCEED even when Spark on Kubernetes actually fails #455

Open
@nitishtw

Description

I run a Livy server on Kubernetes and submit Spark batch jobs to it from Airflow via the Livy REST API. However, even when a Spark job fails due to driver or executor issues, Livy reports the status as "SUCCEED" and returns a successful response, which leads Airflow to mistakenly mark failed jobs as successful.
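For context, this is roughly the submit-and-poll flow on the client side; a minimal sketch using the standard Livy REST endpoints (`POST /batches`, `GET /batches/{id}/state`). The endpoint URL, jar, and class name here are illustrative, not our actual job:

```python
import time
import requests

LIVY_URL = "http://livy:8998"  # assumed Livy endpoint, adjust for your cluster

# Submit a batch job via the Livy REST API (POST /batches)
resp = requests.post(f"{LIVY_URL}/batches", json={
    "file": "local:///opt/spark/examples/jars/spark-examples.jar",  # illustrative jar
    "className": "org.apache.spark.examples.SparkPi",
})
batch_id = resp.json()["id"]

# Poll the batch state (GET /batches/{id}/state) until it reaches a terminal state
while True:
    state = requests.get(f"{LIVY_URL}/batches/{batch_id}/state").json()["state"]
    if state in ("success", "dead", "killed", "error"):
        break
    time.sleep(10)

# Even when the driver pod fails, this can come back as "success",
# which Airflow then treats as a successful task.
print(f"batch {batch_id} finished with state: {state}")
```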

One related issue I found is LIVY-896 (https://issues.apache.org/jira/browse/LIVY-896), which was fixed in release 0.8.0. We upgraded our Livy cluster from 0.7.0 to 0.8.0, but that did not fix the issue in the Kubernetes ecosystem.

While reviewing the Livy code, we found that it returns an exit code of 0 (success) even when the driver pod fails in cluster mode. Ideally, the driver pod failing should cause the application to transition to an 'Error' state, so that the batch is reported as failed.
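As a cross-check, the driver pod phase can be queried directly from the Kubernetes API; here is a minimal sketch using the Python `kubernetes` client. The namespace and the `spark-role=driver` label selector are assumptions about a typical spark-on-k8s deployment and may differ in other setups:

```python
from kubernetes import client, config

# Assumes in-cluster credentials; namespace and label selector are illustrative.
config.load_incluster_config()
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod(
    namespace="spark-jobs",
    label_selector="spark-role=driver",
)
for pod in pods.items:
    # The pod phase shows up as "Failed" when the driver crashes,
    # even while Livy still reports the corresponding batch as successful.
    print(pod.metadata.name, pod.status.phase)
```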
