How do scripted and declarative pipelines differ in Jenkins?
If you are reading this blog post, there is a high chance you're looking for information about the practical differences between scripted and declarative pipelines, correct? You couldn't have found a better place, then. I'm going to show you the four most practical differences between the two. Stay with me for a few minutes and enjoy the ride!
The introduction
If you asked me this question and expected an answer other than "it depends," I would say: use the declarative pipeline. And here's why.
1. Pipeline code validation at startup
Let’s consider the following pipeline code.
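The original code listing was not preserved in this copy; a minimal declarative pipeline that would trigger the validation failure described below could look like this (stage names and messages are illustrative):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building...'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing...'
                // Bug: the echo step accepts only a String, so this call is invalid
                echo 100
            }
        }
    }
}
```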
If we try to execute this pipeline, the validation quickly fails the build: the echo step can be triggered only with a String parameter, so the run stops with a validation error before any stage executes.
Now let’s take a look at the scripted pipeline equivalent of that example.
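The scripted listing is likewise not preserved here; an equivalent sketch, with the same stages and the same invalid step, might be:

```groovy
node {
    stage('Build') {
        echo 'Building...'
    }
    stage('Test') {
        echo 'Testing...'
        // the invalid call is only hit at runtime, after the steps above have run
        echo 100
    }
}
```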
This pipeline executes the same stages and the same steps. There is one significant difference, however. Let's execute it and see what result it produces.
It failed as expected. But this time the Build stage was executed, as well as the first step of the Test stage. As you can see, there was no upfront validation of the pipeline code. The declarative pipeline handles such a use case much better.
2. Restart from stage
Another cool feature that only the declarative pipeline has is "Restart from stage". Let's fix the pipeline from the previous example and see if we can restart the Test stage only.
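Assuming the bug was an echo step called with a non-String argument, the fix is simply to pass a String:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building...'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing...'
                echo '100'  // now a String, so validation passes
            }
        }
    }
}
```

After a green run, the stage view offers the restart option on each completed stage of a declarative pipeline, so the Test stage can be re-run without repeating Build.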
Jenkins scripted pipeline or declarative pipeline
Now, I'm not sure in which situations each of these two types would be the best match. Will the scripted syntax be deprecated soon? Is declarative the future of Jenkins Pipeline?
Can anyone share some thoughts about these two syntax types?
7 Answers
When Jenkins Pipeline was first created, Groovy was selected as the foundation. Jenkins has long shipped with an embedded Groovy engine to provide advanced scripting capabilities for admins and users alike. Additionally, the implementors of Jenkins Pipeline found Groovy to be a solid foundation upon which to build what is now referred to as the "Scripted Pipeline" DSL.
As it is a fully featured programming environment, Scripted Pipeline offers a tremendous amount of flexibility and extensibility to Jenkins users. The Groovy learning-curve isn’t typically desirable for all members of a given team, so Declarative Pipeline was created to offer a simpler and more opinionated syntax for authoring Jenkins Pipeline.
The two are both fundamentally the same Pipeline sub-system underneath. They are both durable implementations of "Pipeline as code." They are both able to use steps built into Pipeline or provided by plugins. Both are able to utilize Shared Libraries.
Where they differ, however, is in syntax and flexibility. Declarative limits what is available to the user with a more strict and pre-defined structure, making it an ideal choice for simpler continuous delivery pipelines. Scripted provides very few limits, insofar that the only limits on structure and syntax tend to be defined by Groovy itself, rather than any Pipeline-specific systems, making it an ideal choice for power-users and those with more complex requirements. As the name implies, Declarative Pipeline encourages a declarative programming model, whereas Scripted Pipeline follows a more imperative programming model.
The Jenkins documentation explains and compares both types well.
To quote it: "Scripted Pipeline offers a tremendous amount of flexibility and extensibility to Jenkins users. The Groovy learning-curve isn't typically desirable for all members of a given team, so Declarative Pipeline was created to offer a simpler and more opinionated syntax for authoring Jenkins Pipeline.
The two are both fundamentally the same Pipeline sub-system underneath."
Declarative appears to be the more future-proof option and the one people recommend. It is the only one the Visual Pipeline Editor can support. It supports validation. And in the end it has most of the power of scripted pipelines, since in most places you can fall back to script blocks. Occasionally someone comes up with a use case where they cannot do what they want with declarative, but that is usually someone who has been using scripted for a while, and those feature gaps are likely to shrink over time.
Another thing to consider is that declarative pipelines have a script() step. It can run any scripted-pipeline code. So my recommendation is to use declarative pipelines and, where necessary, fall back to script() blocks. That way you get the best of both worlds.
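As a sketch, a declarative pipeline can drop into scripted code with the script step:

```groovy
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                echo 'Plain declarative step'
                script {
                    // arbitrary Scripted Pipeline (Groovy) code is allowed here
                    def browsers = ['chrome', 'firefox']
                    for (b in browsers) {
                        echo "Testing with ${b}"
                    }
                }
            }
        }
    }
}
```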
I recently switched from scripted pipelines with a kubernetes agent to declarative ones. Until July 2018, declarative pipelines could not fully define kubernetes pods. However, with the addition of the yamlFile option, you can now read your pod template from a yaml file in your repository.
This lets you use, for example, the excellent vscode kubernetes plugin to validate your pod template, then read it into your Jenkinsfile and use the containers in steps however you like.
As mentioned above, you can also add script blocks. An example: a pod template with custom jnlp and docker containers.
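A sketch of that setup, assuming the pod template is stored as KubernetesPod.yaml next to the Jenkinsfile (file and container names are illustrative):

```groovy
pipeline {
    agent {
        kubernetes {
            // pod template kept in the repository, where it can be linted separately
            yamlFile 'KubernetesPod.yaml'
        }
    }
    stages {
        stage('Build') {
            steps {
                // run steps inside the custom docker container from the template
                container('docker') {
                    sh 'docker version'
                }
            }
        }
    }
}
```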
The declarative pipeline is much superior to the scripted one. A declarative pipeline can do everything a scripted pipeline can by using the script step, and it has many additional features.
In addition, the declarative pipeline supports technologies such as Docker or Kubernetes (see here).
The declarative pipeline is also more future-proof. It is still under active development and keeps gaining new features; for example, the matrix directive was added quite recently, at the end of 2019.
Pipeline Syntax
This section builds on the information introduced in Getting started with Pipeline and should be treated solely as a reference. For more information on how to use Pipeline syntax in practical examples, refer to the Using a Jenkinsfile section of this chapter. As of version 2.5 of the Pipeline plugin, Pipeline supports two discrete syntaxes which are detailed below. For the pros and cons of each, see the Syntax Comparison.
As discussed at the start of this chapter, the most fundamental part of a Pipeline is the «step». Basically, steps tell Jenkins what to do and serve as the basic building block for both Declarative and Scripted Pipeline syntax.
For an overview of available steps, please refer to the Pipeline Steps reference which contains a comprehensive list of steps built into Pipeline as well as steps provided by plugins.
Declarative Pipeline
Declarative Pipeline is a relatively recent addition to Jenkins Pipeline [1] which presents a more simplified and opinionated syntax on top of the Pipeline sub-systems.
All valid Declarative Pipelines must be enclosed within a pipeline block, for example:
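The example itself did not survive in this copy; the canonical minimal form is:

```groovy
pipeline {
    /* insert Declarative Pipeline here */
}
```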
The basic statements and expressions which are valid in Declarative Pipeline follow the same rules as Groovy’s syntax with the following exceptions:
The top-level of the Pipeline must be a block, specifically: pipeline { }.
No semicolons as statement separators. Each statement has to be on its own line.
Blocks must only consist of Sections, Directives, Steps, or assignment statements.
You can use the Declarative Directive Generator to help you get started with configuring the directives and sections in your Declarative Pipeline.
Limitations
There is currently an open issue which limits the maximum size of the code within the pipeline { } block. This limitation does not apply to Scripted pipelines.
Sections
Sections in Declarative Pipeline typically contain one or more Directives or Steps.
agent
The agent section specifies where the entire Pipeline, or a specific stage, will execute in the Jenkins environment depending on where the agent section is placed. The section must be defined at the top-level inside the pipeline block, but stage-level usage is optional.
In the top-level pipeline block and each stage block.
Differences between top and stage level Agents
There are some nuances when adding an agent to the top level or a stage level when the options directive is applied.
Top Level Agents
In agents declared at the outermost level of the Pipeline, the options are invoked after entering the agent. For example, when using timeout, it will only be applied to the execution within the agent.
Stage Agents
In agents declared at a stage level, the options are invoked before entering the agent, so a timeout will also include the agent provisioning time. Because of this, the Pipeline may fail in cases where agent allocation is delayed.
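For illustration, a sketch of a stage-level agent combined with a timeout option; here the hour includes any time spent waiting for an executor:

```groovy
pipeline {
    agent none
    stages {
        stage('Example') {
            agent any
            options {
                // for a stage-level agent, this also counts provisioning time
                timeout(time: 1, unit: 'HOURS')
            }
            steps {
                echo 'Hello, stage-level agent'
            }
        }
    }
}
```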
Parameters
In order to support the wide variety of use-cases Pipeline authors may have, the agent section supports a few different types of parameters. These parameters can be applied at the top-level of the pipeline block, or within each stage directive.
Execute the Pipeline, or stage, on any available agent. For example: agent any
When applied at the top-level of the pipeline block, no global agent will be allocated for the entire Pipeline run, and each stage section will need to contain its own agent section. For example: agent none
Execute the Pipeline, or stage, on an agent available in the Jenkins environment with the provided label. For example: agent { label 'my-defined-label' }
Label conditions can also be used. For example: agent { label 'my-label1 && my-label2' } or agent { label 'my-label1 || my-label2' }
agent { node { label 'labelName' } } behaves the same as agent { label 'labelName' }, but node allows for additional options (such as customWorkspace).
Execute the Pipeline, or stage, with the given container which will be dynamically provisioned on a node pre-configured to accept Docker-based Pipelines, or on a node matching the optionally defined label parameter. docker also optionally accepts an args parameter which may contain arguments to pass directly to a docker run invocation, and an alwaysPull option, which will force a docker pull even if the image name is already present. For example: agent { docker 'maven:3.8.1-adoptopenjdk-11' }, or a docker { } block with additional options such as image, label and args.
docker also optionally accepts registryUrl and registryCredentialsId parameters which help to specify the Docker Registry to use and its credentials. The parameter registryCredentialsId could be used alone for private repositories within Docker Hub. For example:
dockerfile also optionally accepts registryUrl and registryCredentialsId parameters which help to specify the Docker Registry to use and its credentials. For example:
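A sketch combining these parameters (the registry URL, credentials id, and image name are placeholders that must exist in your environment):

```groovy
pipeline {
    agent {
        docker {
            image 'myregistry.com/node'
            label 'my-defined-label'
            registryUrl 'https://myregistry.com/'
            registryCredentialsId 'myPredefinedCredentialsInJenkins'
        }
    }
    stages {
        stage('Example') {
            steps {
                sh 'node --version'
            }
        }
    }
}
```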
Execute the Pipeline, or stage, inside a pod deployed on a Kubernetes cluster. In order to use this option, the Jenkinsfile must be loaded from either a Multibranch Pipeline or a Pipeline from SCM. The Pod template is defined inside the kubernetes { } block. For example, if you want a pod with a Kaniko container inside it, you would define it as follows:
You will need to create a secret aws-secret for Kaniko to be able to authenticate with ECR. This secret should contain the contents of
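A sketch of such a pod definition; the Kaniko image and the aws-secret volume mount are assumptions based on the surrounding text:

```groovy
pipeline {
    agent {
        kubernetes {
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    command: ['sleep']
    args: ['infinity']
    volumeMounts:
    - name: aws-secret
      mountPath: /root/.aws/
  volumes:
  - name: aws-secret
    secret:
      secretName: aws-secret
'''
        }
    }
    stages {
        stage('Build') {
            steps {
                container('kaniko') {
                    sh 'echo kaniko build goes here'
                }
            }
        }
    }
}
```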
Common Options
These are a few options that can be applied to two or more agent implementations. They are not required unless explicitly stated.
A string. Run the Pipeline or individual stage this agent is applied to within this custom workspace, rather than the default. It can be either a relative path, in which case the custom workspace will be under the workspace root on the node, or an absolute path. For example:
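The example referenced above was dropped from this copy; a customWorkspace usage might look like this (the label and path are illustrative):

```groovy
pipeline {
    agent {
        node {
            label 'my-defined-label'
            // run in this directory instead of the default workspace
            customWorkspace '/some/other/workspace'
        }
    }
    stages {
        stage('Example') {
            steps {
                echo "Workspace is ${env.WORKSPACE}"
            }
        }
    }
}
```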
A boolean, false by default. If true, run the container on the node specified at the top-level of the Pipeline, in the same workspace, rather than on a new node entirely.
Execute all the steps defined in this Pipeline within a newly created container of the given name and tag ( 3.8.1-adoptopenjdk-11 ). |
Defining agent none at the top-level of the Pipeline ensures that an Executor will not be assigned unnecessarily. Using agent none also forces each stage section to contain its own agent section. |
Execute the steps in this stage in a newly created container using this image. |
Execute the steps in this stage in a newly created container using a different image from the previous stage. |
post
In the top-level pipeline block and each stage block.
Conditions
always: Run the steps in the post section regardless of the completion status of the Pipeline's or stage's run.
changed: Only run the steps in post if the current Pipeline's or stage's run has a different completion status from its previous run.
fixed: Only run the steps in post if the current Pipeline's or stage's run is successful and the previous run failed or was unstable.
regression: Only run the steps in post if the current Pipeline's or stage's run's status is failure, unstable, or aborted and the previous run was successful.
aborted: Only run the steps in post if the current Pipeline's or stage's run has an "aborted" status, usually due to the Pipeline being manually aborted. This is typically denoted by gray in the web UI.
failure: Only run the steps in post if the current Pipeline's or stage's run has a "failed" status, typically denoted by red in the web UI.
success: Only run the steps in post if the current Pipeline's or stage's run has a "success" status, typically denoted by blue or green in the web UI.
unstable: Only run the steps in post if the current Pipeline's or stage's run has an "unstable" status, usually caused by test failures, code violations, etc. This is typically denoted by yellow in the web UI.
unsuccessful: Only run the steps in post if the current Pipeline's or stage's run does not have a "success" status. This is typically denoted in the web UI depending on the status previously mentioned.
cleanup: Run the steps in this post condition after every other post condition has been evaluated, regardless of the Pipeline or stage's status.
Conventionally, the post section should be placed at the end of the Pipeline. |
Post-condition blocks contain steps the same as the steps section. |
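A short sketch of a post section using two of the conditions above:

```groovy
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                echo 'Hello World'
            }
        }
    }
    post {
        always {
            echo 'This runs regardless of the build result'
        }
        failure {
            echo 'This runs only when the build fails'
        }
    }
}
```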
stages
Containing a sequence of one or more stage directives, the stages section is where the bulk of the «work» described by a Pipeline will be located. At a minimum, it is recommended that stages contain at least one stage directive for each discrete part of the continuous delivery process, such as Build, Test, and Deploy.
Only once, inside the pipeline block.
steps
The steps section defines a series of one or more steps to be executed in a given stage directive.
Inside each stage block.
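Putting stages and steps together, a minimal skeleton with one stage per delivery phase might be:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying..'
            }
        }
    }
}
```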
Directives
environment
The environment directive specifies a sequence of key-value pairs which will be defined as environment variables for all steps, or stage-specific steps, depending on where the environment directive is located within the Pipeline.
This directive supports a special helper method credentials() which can be used to access pre-defined Credentials by their identifier in the Jenkins environment.
Inside the pipeline block, or within stage directives.
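A sketch of both placements; the credentials id is an assumption and must refer to a credential that exists in Jenkins:

```groovy
pipeline {
    agent any
    environment {
        // available to every step in the Pipeline
        CC = 'clang'
    }
    stages {
        stage('Example') {
            environment {
                // helper method resolving a pre-defined Jenkins credential
                SERVICE_CREDS = credentials('my-predefined-credentials-id')
            }
            steps {
                sh 'printenv | sort'
            }
        }
    }
}
```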
Supported Credentials Type
Secret Text: the environment variable specified will be set to the Secret Text content
Secret File: the environment variable specified will be set to the location of the file that is temporarily created
Username and password
the environment variable specified will be set to username:password and two additional environment variables will be automatically defined: MYVARNAME_USR and MYVARNAME_PSW respectively.
SSH with Private Key
the environment variable specified will be set to the location of the SSH key file that is temporarily created and two additional environment variables may be automatically defined: MYVARNAME_USR and MYVARNAME_PSW (holding the passphrase).
options
Only once, inside the pipeline block.
Available Options
Persist artifacts and console output for the specific number of recent Pipeline runs. For example: options { buildDiscarder(logRotator(numToKeepStr: '1')) }
Perform the automatic source control checkout in a subdirectory of the workspace. For example: options { checkoutToSubdirectory('foo') }
Disallow concurrent executions of the Pipeline. Can be useful for preventing simultaneous accesses to shared resources, etc. For example: options { disableConcurrentBuilds() }
Do not allow the pipeline to resume if the controller restarts. For example: options { disableResume() }
Used with docker or dockerfile top-level agent. When specified, each stage will run in a new container instance on the same node, rather than all stages running in the same container instance.
Allows overriding default treatment of branch indexing triggers. If branch indexing triggers are disabled at the multibranch or organization label, options { overrideIndexTriggers(true) } will enable them for this job only. Otherwise, options { overrideIndexTriggers(false) } will disable branch indexing triggers for this job only.
Preserve stashes from completed builds, for use with stage restarting. For example: options { preserveStashes() } to preserve the stashes from the most recent completed build, or options { preserveStashes(buildCount: 5) } to preserve the stashes from the five most recent completed builds.
Set the quiet period, in seconds, for the Pipeline, overriding the global default. For example: options { quietPeriod(30) }
On failure, retry the entire Pipeline the specified number of times. For example: options { retry(3) }
Skip checking out code from source control by default in the agent directive. For example: options { skipDefaultCheckout() }
Skip stages once the build status has gone to UNSTABLE. For example: options { skipStagesAfterUnstable() }
Set a timeout period for the Pipeline run, after which Jenkins should abort the Pipeline. For example: options { timeout(time: 1, unit: 'HOURS') }
Specifying a global execution timeout of one hour, after which Jenkins will abort the Pipeline run. |
Prepend all console output generated by the Pipeline run with the time at which the line was emitted. For example: options { timestamps() }
Set failfast true for all subsequent parallel stages in the pipeline. For example: options { parallelsAlwaysFailFast() }
A comprehensive list of available options is pending the completion of INFRA-1503.
stage options
Available Stage Options
Skip checking out code from source control by default in the agent directive. For example: options { skipDefaultCheckout() }
Set a timeout period for this stage, after which Jenkins should abort the stage. For example: options { timeout(time: 1, unit: 'HOURS') }
Specifying an execution timeout of one hour for the Example stage, after which Jenkins will abort the Pipeline run. |
On failure, retry this stage the specified number of times. For example: options { retry(3) }
Prepend all console output generated during this stage with the time at which the line was emitted. For example: options { timestamps() }
parameters
The parameters directive provides a list of parameters that a user should provide when triggering the Pipeline. The values for these user-specified parameters are made available to Pipeline steps via the params object, see the Parameters, Declarative Pipeline for its specific usage.
Only once, inside the pipeline block.
Available Parameters
A parameter of a string type, for example: parameters { string(name: 'DEPLOY_ENV', defaultValue: 'staging', description: '') }
A text parameter, which can contain multiple lines, for example: parameters { text(name: 'DEPLOY_TEXT', defaultValue: 'One\nTwo\nThree\n', description: '') }
A boolean parameter, for example: parameters { booleanParam(name: 'DEBUG_BUILD', defaultValue: true, description: '') }
A choice parameter, for example: parameters { choice(name: 'CHOICES', choices: ['one', 'two', 'three'], description: '') }
A password parameter, for example: parameters { password(name: 'PASSWORD', defaultValue: 'SECRET', description: 'A secret password') }
A comprehensive list of available parameters is pending the completion of INFRA-1503.
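A minimal sketch of declaring a parameter and reading it back through the params object:

```groovy
pipeline {
    agent any
    parameters {
        string(name: 'PERSON', defaultValue: 'Mr Jenkins', description: 'Who should I say hello to?')
    }
    stages {
        stage('Example') {
            steps {
                // user-supplied values are exposed via the params object
                echo "Hello ${params.PERSON}"
            }
        }
    }
}
```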
triggers
Only once, inside the pipeline block.
Accepts a cron-style string to define a regular interval at which the Pipeline should be re-triggered, for example: triggers { cron('H */4 * * 1-5') }
Accepts a cron-style string to define a regular interval at which Jenkins should check for new source changes. If new changes exist, the Pipeline will be re-triggered. For example: triggers { pollSCM('H */4 * * 1-5') }
Accepts a comma-separated string of jobs and a threshold. When any job in the string finishes with the minimum threshold, the Pipeline will be re-triggered. For example: triggers { upstream(upstreamProjects: 'job1,job2', threshold: hudson.model.Result.SUCCESS) }
The pollSCM trigger is only available in Jenkins 2.22 or later.
Jenkins cron syntax
The Jenkins cron syntax follows the syntax of the cron utility (with minor differences). Specifically, each line consists of 5 fields separated by TAB or whitespace:
Minutes within the hour (0–59)
The hour of the day (0–23)
The day of the month (1–31)
The month (1–12)
The day of the week (0–7) where 0 and 7 are Sunday.
To specify multiple values for one field, the following operators are available. In the order of precedence,
* specifies all valid values
M-N specifies a range of values
M-N/X or */X steps by intervals of X through the specified range or whole valid range
A,B,…,Z enumerates multiple values
To allow periodically scheduled tasks to produce even load on the system, the symbol H (for “hash”) should be used wherever possible. For example, using 0 0 * * * for a dozen daily jobs will cause a large spike at midnight. In contrast, using H H * * * would still execute each job once a day, but not all at the same time, better using limited resources.
The H symbol can be thought of as a random value over a range, but it actually is a hash of the job name, not a random function, so that the value remains stable for any given project.
Beware that for the day of month field, short cycles such as */3 or H/3 will not work consistently near the end of most months, due to variable month lengths. For example, */3 will run on the 1st, 4th, …31st days of a long month, then again the next day of the next month. Hashes are always chosen in the 1-28 range, so H/3 will produce a gap between runs of between 3 and 6 days at the end of a month. (Longer cycles will also have inconsistent lengths but the effect may be relatively less noticeable.)
Empty lines and lines that start with # will be ignored as comments.
H/15 * * * *: every fifteen minutes (perhaps at :07, :22, :37, :52)
H(0-29)/10 * * * *: every ten minutes in the first half of every hour (three times, perhaps at :04, :14, :24)
45 9-16/2 * * 1-5: once every two hours at 45 minutes past the hour, starting at 9:45 AM and finishing at 3:45 PM, every weekday
H H(9-16)/2 * * 1-5: once in every two-hour slot between 9 AM and 5 PM every weekday (perhaps at 10:38 AM, 12:38 PM, 2:38 PM, 4:38 PM)
H H 1,15 1-11 *: once a day on the 1st and 15th of every month except December
stage
The stage directive goes in the stages section and should contain a steps section, an optional agent section, or other stage-specific directives. Practically speaking, all of the real work done by a Pipeline will be wrapped in one or more stage directives.
One mandatory parameter, a string for the name of the stage.
Inside the stages section.
tools
Inside the pipeline block or a stage block.
Supported Tools
The tool name must be pre-configured in Jenkins under Manage Jenkins → Global Tool Configuration. |
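A sketch; the tool name 'apache-maven-3.8.1' is an assumption and must match an installation configured under Global Tool Configuration:

```groovy
pipeline {
    agent any
    tools {
        // auto-installs this Maven and puts it on the PATH for all steps
        maven 'apache-maven-3.8.1'
    }
    stages {
        stage('Example') {
            steps {
                sh 'mvn --version'
            }
        }
    }
}
```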
input
Configuration options
ok: Optional text for the "ok" button on the input form.
submitterParameter: An optional name of an environment variable to set with the submitter name, if present.
parameters: An optional list of parameters to prompt the submitter to provide. See parameters for more information.
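A sketch combining the options above (the message text and variable names are illustrative):

```groovy
pipeline {
    agent any
    stages {
        stage('Example') {
            input {
                message 'Should we continue?'
                ok 'Yes, we should.'
                submitterParameter 'APPROVER'
                parameters {
                    string(name: 'PERSON', defaultValue: 'Mr Jenkins', description: 'Who should I say hello to?')
                }
            }
            steps {
                // input parameters and the submitter variable are available as env vars
                echo "Hello, ${PERSON}. Approved by ${APPROVER}."
            }
        }
    }
}
```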
when
The when directive allows the Pipeline to determine whether the stage should be executed depending on the given condition. The when directive must contain at least one condition. If the when directive contains more than one condition, all the child conditions must return true for the stage to execute. This is the same as if the child conditions were nested in an allOf condition (see the examples below). If an anyOf condition is used, note that the condition skips remaining tests as soon as the first "true" condition is found.
Inside a stage directive
Built-in Conditions
Execute the stage when the branch being built matches the branch pattern (ANT style path glob) given, for example: when { branch 'master' }. Note that this only works on a multibranch Pipeline.
The optional parameter comparator may be added after an attribute to specify how any patterns are evaluated for a match: EQUALS for a simple string comparison, GLOB (the default) for an ANT style path glob (same as for example changeset), or REGEXP for regular expression matching. For example: when { branch pattern: "release-\\d+", comparator: "REGEXP" }
Execute the stage when the build is building a tag. Example: when { buildingTag() }
Execute the stage if the build's SCM changelog contains a given regular expression pattern, for example: when { changelog '.*^\\[DEPENDENCY\\] .+$' }
Execute the stage if the build's SCM changeset contains one or more files matching the given pattern. Example: when { changeset "**/*.js" }
The optional parameter comparator may be added after an attribute to specify how any patterns are evaluated for a match: EQUALS for a simple string comparison, GLOB (the default) for an ANT style path glob (case insensitive by default; this can be turned off with the caseSensitive parameter), or REGEXP for regular expression matching. For example: when { changeset pattern: ".TEST\\.java", comparator: "REGEXP" } or when { changeset pattern: "*/*TEST.java", caseSensitive: true }
Executes the stage if the current build is for a "change request" (a.k.a. Pull Request on GitHub and Bitbucket, Merge Request on GitLab, Change in Gerrit, etc.). When no parameters are passed the stage runs on every change request, for example: when { changeRequest() }.
The optional parameter comparator may be added after an attribute to specify how any patterns are evaluated for a match: EQUALS for a simple string comparison (the default), GLOB for an ANT style path glob (same as for example changeset), or REGEXP for regular expression matching. Example: when { changeRequest authorEmail: "[\\w_-.]+@example.com", comparator: 'REGEXP' }
Execute the stage when the specified environment variable is set to the given value, for example: when { environment name: 'DEPLOY_TO', value: 'production' }
Execute the stage when the expected value is equal to the actual value, for example: when { equals expected: 2, actual: currentBuild.number }
Execute the stage when the specified Groovy expression evaluates to true, for example: when { expression { return params.DEBUG_BUILD } } Note that when returning strings from your expressions they must be converted to booleans or return null to evaluate to false. Simply returning "0" or "false" will still evaluate to "true".
Execute the stage if the TAG_NAME variable matches the given pattern. Example: when { tag "release-*" }. If an empty pattern is provided the stage will execute if the TAG_NAME variable exists (same as buildingTag()).
The optional parameter comparator may be added after an attribute to specify how any patterns are evaluated for a match: EQUALS for a simple string comparison, GLOB (the default) for an ANT style path glob (same as for example changeset), or REGEXP for regular expression matching. For example: when { tag pattern: "release-\\d+", comparator: "REGEXP" }
Execute the stage when the nested condition is false. Must contain one condition. For example: when { not { branch 'master' } }
Execute the stage when all of the nested conditions are true. Must contain at least one condition. For example: when { allOf { branch 'master'; environment name: 'DEPLOY_TO', value: 'production' } }
Execute the stage when at least one of the nested conditions is true. Must contain at least one condition. For example: when { anyOf { branch 'master'; branch 'staging' } }
Execute the stage when the current build has been triggered by the param given. For example: when { triggeredBy 'SCMTrigger' }, when { triggeredBy 'TimerTrigger' }, or when { triggeredBy cause: 'UserIdCause', detail: 'vlinde' }
Evaluating when before entering agent in a stage
By default, the when condition for a stage will be evaluated after entering the agent for that stage, if one is defined. However, this can be changed by specifying the beforeAgent option within the when block. If beforeAgent is set to true, the when condition will be evaluated first, and the agent will only be entered if the when condition evaluates to true.
Evaluating when before the input directive
By default, the when condition for a stage will not be evaluated before the input, if one is defined. However, this can be changed by specifying the beforeInput option within the when block. If beforeInput is set to true, the when condition will be evaluated first, and the input will only be entered if the when condition evaluates to true.
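As a sketch, a stage gated on a branch condition that is checked before the agent is allocated:

```groovy
pipeline {
    agent none
    stages {
        stage('Deploy') {
            agent any
            when {
                // evaluate the condition before spinning up an agent
                beforeAgent true
                branch 'production'
            }
            steps {
                echo 'Deploying'
            }
        }
    }
}
```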