Top 1.8% forks on proxy.golang.org
proxy.golang.org : go.temporal.io/sdk/workflow
Package workflow contains functions and types used to implement Temporal workflows.

## Overview

A workflow is an implementation of coordination logic. The Temporal programming framework (aka SDK) allows you to write the workflow coordination logic as simple procedural code that uses standard Go data modeling. The client library takes care of the communication between the worker service and the Temporal service, and ensures state persistence between events even in case of worker failures. Any particular execution is not tied to a particular worker machine: different steps of the coordination logic can end up executing on different worker instances, with the framework ensuring that the necessary state is recreated on the worker executing the step.

In order to facilitate this operational model, both the Temporal programming framework and the managed service impose some requirements and restrictions on the implementation of the coordination logic. The details of these requirements and restrictions are described in the "Implementation" section below.

As a running example, consider a simple workflow that executes one activity and passes the sole parameter it receives as part of its initialization as a parameter to that activity (a complete sketch is shown after the "Failing a Workflow" section below). The following sections describe what is going on in such a workflow.

## Declaration

In the Temporal programming model a workflow is implemented with a function. The function declaration specifies the parameters the workflow accepts as well as any values it might return.

The first parameter to the function is ctx workflow.Context. This is a required parameter for all workflow functions and is used by the Temporal client library to pass execution context. Virtually all the client library functions that are callable from workflow functions require this ctx parameter. This **context** parameter is the same concept as the standard context.Context provided by Go. The only difference between workflow.Context and context.Context is that the Done() function in workflow.Context returns workflow.Channel instead of the standard go chan.

The second, string parameter is a custom workflow parameter that can be used to pass data into the workflow on start. A workflow can have one or more such parameters. All parameters to a workflow function must be serializable, which essentially means that parameters can't be channels, functions, variadic arguments, or unsafe pointers.

Since the function only declares error as the return value, the workflow does not return a value. The error return value is used to indicate that an error was encountered during execution and the workflow should be failed.

## Implementation

In order to support the synchronous and sequential programming model for the workflow implementation, there are certain restrictions and requirements on how the workflow implementation must behave in order to guarantee correctness. The requirements are that:

- Execution must be deterministic
- Execution must be idempotent

A simplistic way to think about these requirements is that the workflow code:

- Can only read and manipulate local state or state received as return values from Temporal client library functions
- Should not affect changes in external systems other than through the invocation of activities
- Should interact with time only through the functions provided by the Temporal client library (i.e. workflow.Now(), workflow.Sleep())
- Should not create and interact with goroutines directly; it should instead use the functions provided by the Temporal client library (i.e. workflow.Go() instead of go, workflow.Channel instead of chan, workflow.Selector instead of select)
- Should do all logging via the logger provided by the Temporal client library (i.e. workflow.GetLogger())
- Should not iterate over maps using range, because the order of map iteration is randomized

Now that we have laid out the ground rules, we can take a look at how to implement some common patterns inside workflows.

## Special Temporal client library functions and types

The Temporal client library provides a number of functions and types as alternatives to some native Go functions and types. Usage of these replacement functions/types is necessary in order to ensure that the workflow code execution is deterministic and repeatable within an execution context.

Coroutine related constructs:

- workflow.Go : a replacement for the go statement
- workflow.Channel : a replacement for the native chan type; Temporal provides support for both buffered and unbuffered channels
- workflow.Selector : a replacement for the select statement

Time related functions:

- workflow.Now() : a replacement for time.Now()
- workflow.Sleep() : a replacement for time.Sleep()

## Failing a Workflow

To mark a workflow as failed, all that needs to happen is for the workflow function to return an error via the err return value.
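Here is the sketch of the running example referenced above: a minimal workflow that executes one activity and fails if the activity fails. The package name, the activity SimpleActivity, its signature, and the timeout value are assumptions for illustration, not part of this package's API.

```go
package sample

import (
	"context"
	"time"

	"go.temporal.io/sdk/workflow"
)

// SimpleActivity is a stand-in activity; a real implementation would call
// out to an external system.
func SimpleActivity(ctx context.Context, value string) (string, error) {
	return "processed: " + value, nil
}

// SimpleWorkflow coordinates a single activity invocation and fails the
// workflow by returning an error if the activity fails.
func SimpleWorkflow(ctx workflow.Context, value string) error {
	ao := workflow.ActivityOptions{
		StartToCloseTimeout: time.Minute, // options must be set before ExecuteActivity
	}
	ctx = workflow.WithActivityOptions(ctx, ao)

	var result string
	if err := workflow.ExecuteActivity(ctx, SimpleActivity, value).Get(ctx, &result); err != nil {
		return err // returning an error marks the workflow as failed
	}
	workflow.GetLogger(ctx).Info("SimpleActivity completed", "result", result)
	return nil
}
```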
Returning an error and a result from a workflow are mutually exclusive: if an error is returned from a workflow, any result returned alongside it is ignored.

## Execute Activity

The primary responsibility of the workflow implementation is to schedule activities for execution. The most straightforward way to do that is via the library method workflow.ExecuteActivity().

Before calling workflow.ExecuteActivity(), ActivityOptions must be configured for the invocation. These are for the most part options to customize various execution timeouts. The options are passed in by creating a child context from the initial context and overwriting the desired values. The child context is then passed into the workflow.ExecuteActivity() call. If multiple activities share the exact same option values, the same context instance can be used when calling workflow.ExecuteActivity().

The first parameter to the call is the required workflow.Context object. This type is an exact copy of context.Context with the Done() method returning workflow.Channel instead of the native go chan.

The second parameter is the function that we registered as an activity function. This parameter can also be a string representing the fully qualified name of the activity function. The benefit of passing in the actual function object is that the framework can validate the activity parameters.

The remaining parameters are the parameters to pass to the activity as part of the call. In our example we have a single parameter: **value**. This list of parameters must match the list of parameters declared by the activity function. As mentioned above, the Temporal client library will validate that this is indeed the case.

The method call returns immediately with a workflow.Future. This allows more code to be executed without having to wait for the scheduled activity to complete. When we are ready to process the results of the activity, we call the Get() method on the returned future object. The parameters to this method are the ctx object we passed to the workflow.ExecuteActivity() call and an output parameter that will receive the output of the activity. The type of the output parameter must match the type of the return value declared by the activity function. The Get() method will block until the activity completes and results are available.

The result value returned by workflow.ExecuteActivity() can be retrieved from the future and used like any normal result from a synchronous function call. If the result above is a string value, we can use it like any other string from that point on.

In the example above we called the Get() method on the returned future immediately after workflow.ExecuteActivity(). However, this is not necessary. If we wish to execute multiple activities in parallel, we can repeatedly call workflow.ExecuteActivity(), store the futures returned, and then wait for all activities to complete by calling the Get() methods of those futures at a later time, as sketched below. To implement more complex wait conditions on the returned future objects, use the workflow.Selector type. Take a look at our Pickfirst sample for an example of how to use workflow.Selector.
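A sketch of that fan-out pattern, assuming the same package and imports as the earlier sketch and reusing the SimpleActivity defined there; the derived input values are illustrative.

```go
// ParallelWorkflow fans out two invocations of SimpleActivity and waits for
// both results before completing.
func ParallelWorkflow(ctx workflow.Context, value string) error {
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		StartToCloseTimeout: time.Minute,
	})

	// Both calls return immediately with a workflow.Future, so the two
	// activities execute in parallel.
	futureA := workflow.ExecuteActivity(ctx, SimpleActivity, value+"-a")
	futureB := workflow.ExecuteActivity(ctx, SimpleActivity, value+"-b")

	// Get blocks until the corresponding activity has completed.
	var resultA, resultB string
	if err := futureA.Get(ctx, &resultA); err != nil {
		return err
	}
	if err := futureB.Get(ctx, &resultB); err != nil {
		return err
	}
	workflow.GetLogger(ctx).Info("both activities completed", "a", resultA, "b", resultB)
	return nil
}
```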
## Child Workflow

workflow.ExecuteChildWorkflow() enables the scheduling of other workflows from within a workflow's implementation. The parent workflow has the ability to "monitor" and impact the life-cycle of the child workflow, in a similar way it can do for an activity it invoked.

Before calling workflow.ExecuteChildWorkflow(), ChildWorkflowOptions must be configured for the invocation. These are for the most part options to customize various execution timeouts. The options are passed in by creating a child context from the initial context and overwriting the desired values. The child context is then passed into the workflow.ExecuteChildWorkflow() call. If multiple child workflows share the exact same option values, the same context instance can be used when calling workflow.ExecuteChildWorkflow().

The first parameter to the call is the required workflow.Context object. This type is an exact copy of context.Context with the Done() method returning workflow.Channel instead of the native go chan.

The second parameter is the function that we registered as a workflow function. This parameter can also be a string representing the fully qualified name of the workflow function. The benefit of passing in the actual function object is that the framework can validate the workflow parameters.

The remaining parameters are the parameters to pass to the workflow as part of the call. In our example we have a single parameter: value. This list of parameters must match the list of parameters declared by the workflow function.

The method call returns immediately with a workflow.Future. This allows more code to be executed without having to wait for the scheduled workflow to complete. When we are ready to process the results of the workflow, we call the Get() method on the returned future object. The parameters to this method are the ctx object we passed to the workflow.ExecuteChildWorkflow() call and an output parameter that will receive the output of the workflow. The type of the output parameter must match the type of the return value declared by the workflow function. The Get() method will block until the workflow completes and results are available.

The workflow.ExecuteChildWorkflow() function is very similar to the workflow.ExecuteActivity() function. All the patterns described for using workflow.ExecuteActivity() apply to workflow.ExecuteChildWorkflow() as well.

Child workflows can also be configured to continue to exist once their parent workflow is closed. When using this pattern, extra care needs to be taken to ensure the child workflow is started before the parent workflow finishes.

## Error Handling

Activities and child workflows can fail. Activity errors are of type *temporal.ActivityError and errors during child workflow execution are of type *temporal.ChildWorkflowExecutionError. The cause of these errors may be types like *temporal.ApplicationError, *temporal.TimeoutError, *temporal.CanceledError, and *temporal.PanicError. See ExecuteActivity() and ExecuteChildWorkflow() for details.

## Signals

Signals provide a mechanism to send data directly to a running workflow. Previously, you had two options for passing data to the workflow implementation:

- Via start parameters
- As return values from activities

With start parameters, we could only pass in values before workflow execution begins. Return values from activities allowed us to pass information to a running workflow, but this approach comes with its own complications. One major drawback is the reliance on polling: the data needs to be stored in a third-party location until it's ready to be picked up by the activity. Further, the lifecycle of this activity requires management, and the activity requires a manual restart if it fails before acquiring the data.

Signals, on the other hand, provide a fully asynchronous and durable mechanism for providing data to a running workflow.
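A sketch of receiving such a signal inside a workflow; the signal name "trigger-signal" and the string payload are assumptions for illustration, and the same imports as the earlier sketches apply.

```go
// SignalWorkflow blocks until it receives the "trigger-signal" signal and
// then processes the string payload delivered with it.
func SignalWorkflow(ctx workflow.Context) error {
	logger := workflow.GetLogger(ctx)

	// Open a channel for the named signal.
	signalChan := workflow.GetSignalChannel(ctx, "trigger-signal")

	// Wait on the channel via a Selector and decode the payload.
	var payload string
	selector := workflow.NewSelector(ctx)
	selector.AddReceive(signalChan, func(c workflow.ReceiveChannel, more bool) {
		c.Receive(ctx, &payload)
	})
	selector.Select(ctx) // blocks until the signal arrives

	logger.Info("signal received", "payload", payload)
	return nil
}
```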
When a signal is received for a running workflow, Temporal persists the event and the payload in the workflow history. The workflow can then process the signal at any time afterwards without the risk of losing the information. The workflow also has the option to stop execution by blocking on a signal channel.

In the example above, the workflow code uses workflow.GetSignalChannel to open a workflow.Channel for the named signal. We then use a workflow.Selector to wait on this channel and process the payload received with the signal.

## Handle Update

Updates provide a fully async and durable mechanism to send data directly to a running workflow and receive a response back. Unlike a query handler, an update handler has no restrictions over normal workflow code, so you can modify workflow state, schedule activities, launch child workflows, etc. For more information see our docs on handling updates.

## Validate Updates

Note: This is a feature for advanced users for pre-persistence, read-only validation. Other, more advanced validation can and should be done in the handler.

Update validators provide a mechanism to perform read-only validation (i.e. validation that does not modify workflow state or schedule any commands). If the update validator returns any error, the update will fail and not be written into history. For more information see our docs on validator functions.

## ContinueAsNew Workflow Completion

Workflows that need to rerun periodically could naively be implemented as a big for loop with a sleep, where the entire logic of the workflow is inside the body of the for loop. The problem with this approach is that the history for that workflow will keep growing to a point where it reaches the maximum size enforced by the service.

ContinueAsNew is the low-level construct that enables implementing such workflows without the risk of failures down the road. The operation atomically completes the current execution and starts a new execution of the workflow with the same workflow ID. The new execution will not carry over any history from the old execution. To trigger this behavior, the workflow function should terminate by returning the special ContinueAsNewError error, created via workflow.NewContinueAsNewError(). For a complete example implementing this pattern, please refer to our Cron example.

## SideEffect API

workflow.SideEffect executes the provided function once, records its result into the workflow history, and doesn't re-execute upon replay. Instead, it returns the recorded result. Use it only for short, nondeterministic code snippets, like getting a random value or generating a UUID. It can be seen as an "inline" activity. However, one thing to note about workflow.SideEffect is that whereas for activities Temporal guarantees "at-most-once" execution, no such guarantee exists for workflow.SideEffect. Under certain failure conditions, workflow.SideEffect can end up executing the function more than once.

The only way to fail SideEffect is to panic, which causes a workflow task failure. After the timeout, the workflow task is rescheduled and re-executed, giving SideEffect another chance to succeed. Be careful not to return any data from the SideEffect function in any way other than through its recorded return value.

## Query API

A workflow execution could be stuck at some state for longer than the expected period. Temporal provides facilities to query the current call stack of a workflow execution. You can issue this query with tctl using the built-in query type __stack_trace, which is supported by the Temporal client library.

You can also add your own custom query types to support things like querying the current state of the workflow or querying how many activities the workflow has completed. To do so, you need to set up your own query handler using workflow.SetQueryHandler in your workflow code:
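A sketch of such a handler for a query type named "state"; the currentState variable, the timer, and the package/imports (shared with the earlier sketches) are assumptions for illustration.

```go
// QueryableWorkflow exposes its progress through a custom "state" query.
func QueryableWorkflow(ctx workflow.Context) error {
	currentState := "started" // workflow-local state reported by the query

	// Register the query handler; its return value is serialized back to the caller.
	err := workflow.SetQueryHandler(ctx, "state", func() (string, error) {
		return currentState, nil
	})
	if err != nil {
		return err
	}

	currentState = "waiting on timer"
	if err := workflow.Sleep(ctx, time.Hour); err != nil {
		return err
	}
	currentState = "done"
	return nil
}
```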
The sample above sets up a query handler to handle the query type "state". With that in place, you should be able to query the workflow with the CLI. Besides using tctl, you can also issue a query from code using the QueryWorkflow() API on the Temporal Client object.

## Registration

For some client code to be able to invoke a workflow type, the worker process needs to be aware of all the implementations it has access to. A workflow is registered with a call to worker.RegisterWorkflow(). This call essentially creates an in-memory mapping inside the worker process between the fully qualified function name and the implementation. If the worker receives tasks for a workflow type it does not know, it will fail that task. However, the failure of the task will not cause the entire workflow to fail.

Similarly, we need at least one worker that hosts and registers the activity functions. See the activity package for more details on activity registration.

## Testing

The Temporal client library provides a test framework to facilitate testing workflow implementations. The framework is suited for implementing unit tests as well as functional tests of the workflow logic. The code below implements the unit tests for the SimpleWorkflow sample.

First, we define a "test suite" struct that absorbs both the basic suite functionality from testify, via suite.Suite, and the suite functionality from the Temporal test framework, via go.temporal.io/sdk/testsuite.WorkflowTestSuite. Since every test in this suite will test our workflow, we add a property to our struct to hold an instance of the test environment. This allows us to initialize the test environment in a setup method. For testing workflows we use a go.temporal.io/sdk/testsuite.TestWorkflowEnvironment.

We then implement a SetupTest method to set up a new test environment before each test. Doing so ensures that each test runs in its own isolated sandbox. We also implement an AfterTest function where we assert that all mocks we set up were indeed called, by invoking s.env.AssertExpectations(s.T()). Finally, we create a regular test function recognized by "go test" and pass the struct to suite.Run.

The simplest test case we can write is to have the test environment execute the workflow and then evaluate the results. Calling s.env.ExecuteWorkflow(...) will execute the workflow logic and any invoked activities inside the test process. The first parameter to s.env.ExecuteWorkflow(...) is the workflow function, and any subsequent parameters are values for the custom input parameters declared by the workflow function. An important thing to note is that unless the activity invocations are mocked or the activity implementation is replaced (see the next section), the test environment will execute the actual activity code, including any calls to outside services.

After executing the workflow, we assert that the workflow ran through to completion via the call to s.env.IsWorkflowComplete(). We also assert that no errors were returned, by asserting on the return value of s.env.GetWorkflowError(). If our workflow returned a value, we could retrieve that value via a call to s.env.GetWorkflowResult(&value) and add asserts on that value. A sketch of this setup and a first test case is shown below.
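A sketch of the test suite just described, assuming the SimpleWorkflow from the earlier example; the suite name, test names, and input string are illustrative.

```go
package sample

import (
	"testing"

	"github.com/stretchr/testify/suite"
	"go.temporal.io/sdk/testsuite"
)

// UnitTestSuite combines testify's suite with the Temporal test framework.
type UnitTestSuite struct {
	suite.Suite
	testsuite.WorkflowTestSuite

	env *testsuite.TestWorkflowEnvironment
}

// SetupTest gives every test its own isolated test environment.
func (s *UnitTestSuite) SetupTest() {
	s.env = s.NewTestWorkflowEnvironment()
}

// AfterTest verifies that all mocks set up on the environment were called.
func (s *UnitTestSuite) AfterTest(suiteName, testName string) {
	s.env.AssertExpectations(s.T())
}

// Test_SimpleWorkflow_Success executes the workflow (and its real activities)
// inside the test process and checks that it completed without error.
func (s *UnitTestSuite) Test_SimpleWorkflow_Success() {
	s.env.ExecuteWorkflow(SimpleWorkflow, "test-input")

	s.True(s.env.IsWorkflowComplete())
	s.NoError(s.env.GetWorkflowError())
}

// TestUnitTestSuite is the entry point recognized by "go test".
func TestUnitTestSuite(t *testing.T) {
	suite.Run(t, new(UnitTestSuite))
}
```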
When testing workflows, and especially when unit testing workflows, we want to test the workflow logic in isolation. Additionally, we want to inject activity errors during our test runs. The test framework provides two mechanisms that support these scenarios: activity mocking and activity overriding. Both mechanisms allow you to change the behavior of activities invoked by your workflow without having to modify the actual workflow code.

Let's first take a look at a test that simulates a failure via the "activity mocking" mechanism. In this test we want to simulate the activity SimpleActivity, invoked by our workflow SimpleWorkflow, returning an error. We do that by setting up a mock on the test environment for SimpleActivity that returns an error. With the mock set up, we can execute the workflow via the s.env.ExecuteWorkflow(...) method and assert that the workflow completed and returned the expected error.

Simply mocking the execution to return a desired value or error is a pretty powerful mechanism to isolate workflow logic. However, sometimes we want to replace the activity with an alternate implementation to support a more complex test scenario. For our simple workflow, let's assume we want to validate that the activity gets called with the correct parameters. To do so, we provide a function implementation as the parameter to Return. This allows us to provide an alternate implementation for the activity SimpleActivity. The framework will execute this function whenever the activity is invoked and pass on the return value from the function as the result of the activity invocation. Additionally, the framework will validate that the signature of the "mock" function matches the signature of the original activity function. Since this can be an entire function, there is really no limitation as to what we can do in it. In this example, we assert that the "value" param has the same content as the value we passed to the workflow. Both mocking styles are sketched below, after the note on retries.

NOTE: The default MaximumAttempts for the retry policy set by the server is 0, which means unlimited retries. However, during a unit test the default MaximumAttempts is 10, to avoid a test getting stuck.
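A sketch of both mocking styles, extending the UnitTestSuite above; the error message, input value, and assertions are illustrative, SimpleActivity is assumed to take a string and return a string and an error, and the additional imports "context", "errors", and "github.com/stretchr/testify/mock" are assumed.

```go
// Test_SimpleWorkflow_ActivityFails mocks SimpleActivity to return an error
// and asserts that the workflow surfaces that failure.
func (s *UnitTestSuite) Test_SimpleWorkflow_ActivityFails() {
	s.env.OnActivity(SimpleActivity, mock.Anything, mock.Anything).
		Return("", errors.New("SimpleActivityFailure"))

	s.env.ExecuteWorkflow(SimpleWorkflow, "test-input")

	s.True(s.env.IsWorkflowComplete())
	err := s.env.GetWorkflowError()
	s.Error(err)
	s.Contains(err.Error(), "SimpleActivityFailure")
}

// Test_SimpleWorkflow_ActivityParams overrides SimpleActivity with an
// alternate implementation that validates the parameter it was called with.
func (s *UnitTestSuite) Test_SimpleWorkflow_ActivityParams() {
	s.env.OnActivity(SimpleActivity, mock.Anything, mock.Anything).
		Return(func(ctx context.Context, value string) (string, error) {
			s.Equal("test-input", value) // the workflow should pass its input through
			return value, nil
		})

	s.env.ExecuteWorkflow(SimpleWorkflow, "test-input")

	s.True(s.env.IsWorkflowComplete())
	s.NoError(s.env.GetWorkflowError())
}
```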
purl: pkg:golang/go.temporal.io/sdk/workflow
Keywords: golang, open-source, sdk-go, service-bus, workflow-automation, workflow-engine, workflow-management-system
License: MIT
Latest release: 25 days ago
Namespace: go.temporal.io/sdk
Stars: 739 on GitHub
Forks: 260 on GitHub
Total Commits: 1517
Committers: 166
Average commits per author: 9.139
Development Distribution Score (DDS): 0.871