Overview of Workflows in vRealize Orchestrator (vRO)
The magic of vRealize Orchestrator lies in the workflow: a flexible, open-ended, endlessly customizable construct. The vRealize Orchestrator Client is essentially a visual scripting tool for creating workflows. Workflows are self-documenting: a properly constructed workflow visually displays, exactly and succinctly, the order in which it operates and the primary logic it uses along the way.
For all of that to work, the implementation has to be solid, and vRealize Orchestrator is both solid and reliable.
The primary unit of magic in vRealize Orchestrator is the workflow. A workflow is exactly what it sounds like: a set of steps derived from a particular problem, coded in a specific order, that can be executed at will, consistently and repeatably.
Further adding to the magic is the fact that workflows can call other workflows, building block upon building block, to accomplish increasingly complex tasks while reusing logic that has already been defined and tested. In addition, workflows can take one or more parameters (which can even be workflows) and produce one or more outputs. This makes them highly flexible: behavior is derived from a set of inputs, and output is produced based on those inputs.
Each workflow is like a black box, the inner workings of which can be fully abstracted from the end user. The end user of a workflow might be a person in the vRealize Orchestrator Client, or it may be a program executing on a different system in a different geography. The possibilities are quite literally endless.
A workflow is constructed from various components. The main piece is the workflow body, which contains the steps of the workflow to be executed in order. Each step of the workflow has the ability to take its own inputs and produce its own outputs. The workflow itself can also take inputs and produce outputs, and while it is in the process of execution, it can maintain a set of internal data values, known as attributes.
The above example shows a simplified workflow and its associated structure. The workflow takes a set of four inputs. These inputs are mapped to various workflow item node inputs within the workflow body. The workflow has three local attributes defined. Two of the attributes are used solely as inputs to workflow item nodes. The third attribute is used as an output of a workflow item node (i.e. it is written to by the node), and then subsequently used as an input in a workflow item node. Two of the workflow item nodes produce outputs that are mapped to the overall workflow outputs.
This example is fairly simplistic, but helps to illustrate the components of a workflow and how they are wired together to make a functional process that can read and write data.
The following sections expand on each of the components of a workflow.
Workflows can take inputs, which are defined as simple or complex data or reference types. Each input is defined with a name, a type, and a description. Input values are supplied at run-time, either by human intervention in the vRealize Orchestrator Client or programmatically through some other process. Input values can also be passed down by parent workflows that call child workflows; thus a single workflow can wrap several child workflows.
Attributes are similar to inputs in that they are defined as a set of name, type, description, and value entries. Attributes are special, however, in that they exist only within the boundary of the workflow body and do not persist outside a workflow run. They generally act as local variables needed for internal computation within the bounds of the workflow.
An example of this would be computing the hostname to give a virtual machine. An attribute might start as an empty string at the beginning of the workflow run. While the workflow runs, it appends characters to the hostname depending on workflow inputs that were given at run-time. By the end of the workflow, the hostname attribute has grown to become the full name of the server. This attribute is then translated to an output value and assigned before finalizing workflow execution.
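The hostname-building pattern above can be sketched in plain JavaScript, the language vRealize Orchestrator uses for scriptable tasks. This is an illustrative sketch only; the input names (`prefix`, `siteCode`, `index`) are hypothetical, not part of any vRO API:

```javascript
// Illustrative sketch of building up a hostname attribute from
// run-time inputs; the names here are hypothetical examples.
function buildHostname(prefix, siteCode, index) {
    var hostname = "";                           // attribute starts empty
    hostname += prefix;                          // e.g. "web"
    hostname += "-" + siteCode;                  // e.g. "nyc"
    hostname += "-" + ("00" + index).slice(-3);  // zero-padded counter
    return hostname;                             // assigned to the workflow output
}
```

For example, `buildHostname("web", "nyc", 7)` produces `"web-nyc-007"`, which would then be mapped to the workflow's output before execution finishes.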
Another example is incrementing a loop counter. Suppose you have a loop that you want to run 10 times. You create a loop counter attribute that starts at 0. Each time the loop runs, an Increase Counter node increases the counter by one, and a Decision node then checks whether the counter equals 10. When it does, execution forks to an alternate path to complete the workflow. At that point the attribute is no longer needed: it will not persist when the workflow completes (except for debugging purposes, explained [-Section Reference-]).
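The counter pattern maps directly onto ordinary loop logic. A minimal sketch in plain JavaScript, mirroring the behavior of the Increase Counter and Decision nodes (this is not actual node code):

```javascript
// Sketch of the Increase Counter / Decision pairing as plain logic.
// Starting at 0 and exiting when the counter reaches 10 gives ten
// passes through the body (starting at 1 would give only nine).
var counter = 0;   // the loop counter attribute
var passes = 0;    // stands in for the loop body's work
do {
    passes++;      // ...loop body work happens here...
    counter++;     // Increase Counter node
} while (counter !== 10);  // Decision node: exit when counter equals 10
```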
The workflow body, or content of the workflow, consists of a set of nodes and their order of operation. The operational order is defined by linking each node with its predecessor and successor. The workflow starts with a special Start node type and immediately moves to the first node. From there, execution follows the resultant path from each node as it moves from node to node. Nodes can have success and failure paths, which dictate the direction the workflow moves, and there are also special conditional nodes that define the path of the workflow. Thus, a workflow can take several directions while it executes, and may not always terminate at the same spot, especially if an error is encountered during execution.
Each workflow has the ability to produce outputs, although a workflow need not produce any outputs at all; equally, a workflow can produce many. Each output is similar to inputs and attributes in that it can be of the same data types and hold the same values. The special nature of workflow outputs is that they are produced as the result of a workflow execution and are made available outside the workflow at the end of the execution. These outputs can then be consumed by the upstream calling workflow if there was one, or by the program that made the API call, or they can simply stand as a result and go unconsumed. When a workflow produces an output, it begins to behave much like a function in a programming language; unlike most programming languages, however, a workflow can return many outputs.
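The function analogy holds, with the twist that a workflow can return several named outputs at once. In JavaScript terms that is closer to returning an object with one property per output than returning a single value. An illustrative sketch, not vRO API code; the names and values are hypothetical:

```javascript
// A workflow with two outputs behaves like a function that returns
// an object with one property per output parameter.
function provisionVm(name, site) {
    // ...the workflow body would do the real work here...
    var hostname = name + "." + site + ".example.com"; // hypothetical output 1
    var ipAddress = "10.0.0.50";                       // hypothetical output 2
    return { hostname: hostname, ipAddress: ipAddress };
}

var result = provisionVm("web01", "nyc");
// result.hostname and result.ipAddress are both available to the caller
```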
There are several node types available to you while crafting workflows.
Generic node types are the main building blocks for workflows. The overwhelming majority of nodes dragged and placed into workflows will come from this category.
- Decision – the decision node can take an input value (either an input or attribute) and perform a check on the value to determine equality or compare size. When the condition is met, the decision will fork workflow execution in one direction, and when the condition is not met, will fork it in the other. This is the same behavior as a yes/no decision node in a traditional flow chart.
- Custom Decision – a custom decision is similar to a decision, but takes a custom block of scripting to determine the condition. You provide the logic in code, and the workflow execution is forked according to whether your code returns the Boolean value of true or false at the end of the code block.
- Decision Activity – yet another take on the decision node; this time, the logic of the decision is handled by a workflow or action. You provide one that returns a Boolean value and pass it the appropriate input values to make its decision.
- User Interaction – the user interaction node causes the workflow to pause upon execution of the node and wait for a user to interact with the workflow. The user interaction node takes attributes and inputs and can define a custom presentation for the user to see in the vRealize Orchestrator Client. The workflow will not continue execution until a user finishes the interaction.
- Waiting Timer – the waiting timer sets the workflow in a waiting state where no execution happens until the target date/time has been hit, at which point the execution resumes where it left off. This can be useful for multi-day workflows that need to wait on external processes that can take time to complete and aren’t instantaneous.
- Waiting Event – waiting event waits on the receipt of a specified input trigger to continue execution. This is useful for synchronization with an external system, where some portion of the workflow can go asynchronously to the external system, but then requires a synchronization point, which comes in the form of a trigger.
- End Workflow – the end workflow node causes the execution of the workflow to stop and all outputs are passed back with their current values. End workflow nodes can be forked off decision or exception nodes, and a single workflow may have numerous exit points.
- Throw Exception – throw exception raises an exception in the workflow, which halts execution and ends with an error being thrown. If this workflow is nested inside one or more workflows, the exception is raised up through the workflows until one handles it and continues or until it reaches the top most workflow and ends all execution in an error.
- Workflow Note – the workflow note node is a special node that doesn't connect to any other node. It is a note field containing information, notes, or a subtitle for the workflow. The workflow note sits in the background of the canvas, so other workflow nodes are drawn on top of it, making it useful as a sort of visual container for grouping related nodes.
- Workflow Element – the workflow element node executes the specified workflow synchronously within the current workflow.
- Foreach Element – the foreach element node executes a workflow once for each item in an array specified as an input. The array is iterated from the first element to the last, and each item can be used natively as an input within the foreach element. This is a good shorthand way to execute a loop, although an error in one iteration will abort the whole loop, which may not be the desired behavior. If you want execution to continue through the remaining iterations, you need to design a custom loop that handles errors gracefully while moving on to the next iteration.
- Asynchronous Workflow – the asynchronous workflow node executes a workflow asynchronously. That is, the specified workflow will be kicked off and control immediately continues back to the main workflow. The result of this node is a Workflow Token, which can be used to track execution of the workflow later. Errors raised during the execution of an asynchronous workflow will not be trapped in the parent workflow, and thus have to be tracked using the resulting Workflow Token object.
- Schedule Workflow – schedules the specified workflow to run at a provided date/time in the future.
- Nested Workflows – causes the specified workflows to be run as nested workflows. Can reference multiple workflows, both locally present ones and remote ones present on another vRealize Orchestrator server.
- Handle Error – enables you to implement error handling in the workflow on the error path of a node. In the event that the node throws an error, the handle error node lets you choose to throw the exception, call a workflow, or execute a scriptable task.
- Default Error Handler – enables you to choose to throw an exception when an error is encountered or continue execution as if nothing happened (i.e. ignore the error).
- Switch – the switch node behaves much like a switch control statement in programming contexts – it can switch on a specific input variable and control flow based on various value matches (known as cases), including a default path when no match is made (case default).
- Sleep – causes the workflow to sleep for a specified number of milliseconds.
- Change Credential – changes the credential of the user running the workflow.
- Wait Until Date – wait until date is similar to waiting timer, except that if the server fails during a wait until date node execution, the workflow will fail; whereas with the waiting timer node, the wait will resume upon server restart.
- Wait For Custom Event – wait for custom event is similar to waiting event, except that if the server fails during a wait for custom event node execution, the workflow will fail; whereas with the waiting event node, the wait will resume upon server restart.
- Send Custom Event – sends a custom event that can be trapped with the Waiting Event and Wait For Custom Event nodes. Use it for signaling between workflows and setting up synchronization between them.
- Increase Counter – increases a counter attribute by one. Useful in loops.
- Decrease Counter – decreases a counter attribute by one. Useful in loops.
- System Log – writes an info log entry to the system log writer.
- System Warning – writes a warning log entry to the system log writer.
- System Error – writes an error log entry to the system log writer.
- Server Log – writes an info log entry to the server log writer.
- Server Warning – writes a warning log entry to the server log writer.
- Server Error – writes an error log entry to the server log writer.
- System+Server Log – writes an info log entry to both the system and server log writers.
- System+Server Warning – writes a warning log entry to both the system and server log writers.
- System+Server Error – writes an error log entry to both the system and server log writers.
- HTTP Post – initiates a basic HTTP Post operation to the specified URL. Good for basic HTTP operations; however, more complicated operations might be better performed using the HTTP-Rest plug-in.
- HTTP Get – initiates a basic HTTP Get operation to the specified URL. Good for basic HTTP operations; however, more complicated operations might be better performed using the HTTP-Rest plug-in.
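The Custom Decision node described above ends its script block by returning a Boolean. A representative sketch of what such a block might look like; the variable names stand in for hypothetical workflow inputs and are not part of any vRO API:

```javascript
// Custom Decision sketch: the final Boolean return selects the
// true (green) or false (red) path out of the node.
// vmMemoryMb and memoryLimitMb stand in for workflow inputs.
var vmMemoryMb = 4096;
var memoryLimitMb = 8192;

function customDecision(memoryMb, limitMb) {
    // Any logic may run here; only the returned Boolean matters.
    return memoryMb <= limitMb; // true -> continue, false -> alternate path
}

var takeTruePath = customDecision(vmMemoryMb, memoryLimitMb);
```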
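The Foreach Element caveat noted above, where one failed iteration aborts the whole loop, is commonly worked around with a custom loop in a scriptable task. A hedged sketch of that pattern, where `processItem` is a hypothetical stand-in for the real per-item work:

```javascript
// Custom loop that keeps going when one iteration fails, unlike
// the Foreach Element's all-or-nothing behavior. processItem is a
// hypothetical stand-in for the real per-item workflow logic.
function processAll(items, processItem) {
    var failures = [];
    for (var i = 0; i < items.length; i++) {
        try {
            processItem(items[i]);
        } catch (e) {
            // Record the error and continue with the next item
            failures.push({ item: items[i], error: String(e) });
        }
    }
    return failures; // an empty array means every iteration succeeded
}
```

If the second of three items throws, the loop still processes the third and returns a single failure record, rather than aborting outright.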