This section describes how to install and configure Alfresco Process Services.
You can upgrade from earlier versions to Alfresco Process Services 1.11.
There are two methods for upgrading:
Using the Process Services installation wizard
Using the WAR file distribution
You can use the Alfresco Process Services installation wizard to upgrade to the latest version. The process is similar to installing for the first time. For more details, see the Installing using setup wizards [10] section.
To upgrade:
Alternatively, copy the license to your home directory using the terminal (OSX) or command prompt (Windows):
~/.activiti/enterprise-license/ or C:\.activiti\enterprise-license
Tip: You can also upload a license from the user interface. See the Uploading a license file [11] section for more details.
You can upgrade using the WAR file in your application server distribution. These instructions use the WAR file from the Apache Tomcat-based distribution; however, you can choose from different distributions for various application servers.
Review the Supported Stacks [12] list to see what’s supported.
To upgrade using the WAR file:
Any database upgrade changes should have now been applied.
You can run the application on multiple servers for performance, resilience, or failover reasons. The application architecture is designed to be stateless. This means that any server can handle any request from any user. When using multiple servers, it is enough to have a traditional load balancer (or proxy) in front of the servers running the Alfresco Process Services application. Scaling out is done in a "horizontal" way, by adding more servers behind the load balancer.
Note that each of the servers will connect to the same relational database. While scaling out by adding more servers, make sure that the database can handle the additional load.
Configure Alfresco Process Services using a properties file named activiti-app.properties. This file must be placed on the application server’s classpath to be found.
The properties file can be present in the following locations:
An activiti-app.properties file with default values in the WAR file (or exploded WAR folder) under the WEB-INF/classes/META-INF/activiti-app folder.
An activiti-app.properties file with custom values on the classpath. For example, the WEB-INF/classes folder of the WAR, the /lib folder of Tomcat, or other places specific to the web container being used.
The values of a configuration file on the classpath have precedence over the values in the WEB-INF/classes/META-INF/activiti-app/activiti-app.properties file.
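For example, dropping a custom activiti-app.properties into Tomcat's lib folder overrides the shipped defaults. The values below are illustrative placeholders:

```properties
# lib/activiti-app.properties — takes precedence over the file shipped in
# WEB-INF/classes/META-INF/activiti-app/activiti-app.properties
server.contextroot=activiti-app
# use a unique value per installation; placeholder shown
security.rememberme.key=change-me-to-a-long-random-value
```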
For the Alfresco Process Services user interface, there is an additional configuration file named app-cfg.js. This file is located inside the .war file’s script directory.
At a minimum, the application requires the following settings to run:
A database connection that is configured either Using JDBC Connection Parameters [14] or Using a JNDI Data Source [15]
An accurate Hibernate dialect - see Hibernate Settings [16]
All other properties use the default settings, and this will allow the application to start up and run.
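As a minimal sketch, assuming a local MySQL database, activiti-app.properties therefore only needs the connection parameters and the matching dialect:

```properties
# database connection (values are placeholders for a local MySQL instance)
datasource.driver=com.mysql.jdbc.Driver
datasource.url=jdbc:mysql://127.0.0.1:3306/activiti?characterEncoding=UTF-8
datasource.username=alfresco
datasource.password=alfresco
# Hibernate dialect matching the database
hibernate.dialect=org.hibernate.dialect.MySQLDialect
```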
By default, the following properties are defined.
Property | Description | Default |
---|---|---|
server.contextroot | The context root on which the user accesses the application. This is used in various places to generate URLs to the correct resources. | activiti-app |
security.rememberme.key | Used for cookie validation. In a multi-node setup, all nodes must have the same value for this property. | somekey |
security.csrf.disabled | When true, the cross-site request forgery (CSRF) protection is disabled. | false |
security.signup.disabled | When true, the Alfresco Process Services sign up functionality is disabled. An error message "Sign up is not possible" will be displayed. | false |
You need to know which encryption algorithms are supported. You can list them on the JVM to which the application will be deployed using the listAlgorithms tool that Jasypt provides: http://www.jasypt.org/cli.html [44]
If you do not specify an algorithm to Jasypt, then you effectively obtain the default of PBEWithMD5AndDES. Some algorithms may appear in the list but may not be usable as the JRE policy blocks them.
If you want to increase your range of choices then you can modify the JRE policies: https://www.ca.com/us/services-support/ca-support/ca-support-online/knowledge-base-articles.tec1698523.html [45] There is an equivalent for the IBM JRE: https://www-01.ibm.com/marketing/iwm/iwm/web/reg/pick.do?source=jcesdk. [46]
Algorithms using AES are generally considered most secure. TripleDES also passes security checks at present. You should consult your security department for advice specific to your organization and the needs of your server.
You can use the encrypt script that comes with Jasypt to encrypt the value against your chosen secret password. In addition to their documentation, see this guide: http://www.programering.com/a/MjN1kTNwATg.html [47].
We recommend avoiding quotes around the value. Also check that you can decrypt the value, preferably using the intended JRE.
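As a sketch, an encryption round trip with the scripts from the Jasypt CLI distribution could look like this (the secret password and algorithm are placeholders; script names are from the Jasypt 1.9.x distribution — verify the output decrypts before using it):

```shell
# encrypt the database password with the chosen algorithm and secret password
./encrypt.sh input="databasePassword" password="secretpassword" algorithm="PBEWITHSHA1ANDDESEDE"
# verify that the produced value decrypts back to the original
./decrypt.sh input="<encrypted-value-from-above>" password="secretpassword" algorithm="PBEWITHSHA1ANDDESEDE"
```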
See the application installation instructions.
If the property is called datasource.password, remove the existing entry and put in a new entry of the form datasource.password=ENC(<ENCRYPTEDPASSWORD>) where ENCRYPTEDPASSWORD is the value encrypted by Jasypt.
If, for example, you are using Tomcat on Unix then you could include a shell script called setenv.sh in tomcat_home/bin with the following content:
export JAVA_OPTS="$JAVA_OPTS -Djasypt.encryptor.password=secretpassword -Djasypt.encryptor.algorithm=PBEWITHSHA1ANDDESEDE"
This assumes that your password is 'secretpassword' and you are using the algorithm PBEWITHSHA1ANDDESEDE. The configuration could alternatively be done in startup.sh.
If you then run using catalina.sh, you will see the secret password in the logging on application startup. This is a Tomcat feature, which you can disable by removing <Listener className="org.apache.catalina.startup.VersionLoggerListener" /> from your Tomcat's server.xml: https://stackoverflow.com/questions/35485826/turn-off-tomcat-logging-via-spring-boot-application [48] You may initially, however, want to leave this on for diagnostic purposes until you have confirmed that encryption is working. For an example of this, see https://stackoverflow.com/questions/17019233/pass-user-defined-environment-variable-to-tomcat [49]
For other servers there will be other ways of setting environment/JVM variables. These values can be read as JVM parameters, environment variables, or property file entries (though you would not want to put the secret encryption password in a property file). With WebSphere, for example, they could be set using JVM parameter configuration http://www-01.ibm.com/support/docview.wss?uid=swg21417365 [50] or environment variable configuration https://www.ibm.com/support/knowledgecenter/en/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/welcvariables.html [51]
The application should now start as normal. If it doesn’t, try without the encrypted values and without the encryption parameters to determine whether the problem is related to the encryption setup. Check that you are able to encrypt and decrypt with Jasypt to rule out any issues due to copy-paste errors.
Some property values (though not sensitive ones) are logged by Alfresco applications if the log level is set high. If you want to restrict this, then reduce the log level in log4j.properties.
Set the following properties to change the database.
Using JDBC Connection Parameters
Property | Description |
---|---|
datasource.driver | The JDBC driver used to connect to the database. Note that the driver must be on the classpath of the web application. |
datasource.url | The JDBC URL used to connect to the database. |
datasource.username | The user of the database system that is used to connect to the database. |
datasource.password | The password of the above user. |
Example:
datasource.driver=com.mysql.jdbc.Driver
datasource.url=jdbc:mysql://127.0.0.1:3306/activiti?characterEncoding=UTF-8
datasource.username=alfresco
datasource.password=alfresco
Connection Pooling
When using JDBC Connection Parameters, you can configure the following connection pool settings to suit the anticipated load.
Property | Description | Value |
---|---|---|
datasource.min-pool-size | The minimum number of connections in the connection pool. | 5 |
datasource.max-pool-size | The maximum number of connections in the connection pool. | 100 |
datasource.acquire-increment | The number of additional connections the system will try to acquire each time the connection pool is exhausted. | 5 |
datasource.preferred-test-query | The query used to verify that the connection is still valid. | No default value (not a required property). The value depends on the database: select 1 for H2, MySQL, PostgreSQL and Microsoft SQL Server, SELECT 1 FROM DUAL for Oracle and SELECT current date FROM sysibm.sysdummy1 for DB2. |
datasource.test-connection-on-checkin | Boolean value. If true, an operation will be performed asynchronously on every connection checkin to verify that the connection is valid. For best performance, a proper datasource.preferred-test-query should be set. | true |
datasource.test-connection-on-checkout | Boolean value. If true, an operation will be performed on every connection checkout to verify that the connection is valid. Testing connections on checkout is the simplest and most reliable form of connection testing. For best performance, a proper datasource.preferred-test-query should be set. | true |
datasource.max-idle-time | The number of seconds a connection can be pooled before being discarded. | 1800 |
datasource.max-idle-time-excess-connections | The number of seconds that connections in excess of the minimum pool size are permitted to remain idle in the pool before being discarded. The intention is that connections remain in the pool during a load spike. | 1800 |
The connection pooling framework used is C3P0 [52]. It has extensive documentation on the settings described above.
Using a JNDI Data source
If a JNDI data source is configured in the web container or application server, the JNDI name should be set with the following properties:
Property | Description | Value |
---|---|---|
datasource.jndi.name | The JNDI name of the datasource. This varies depending on the application server or web container. | jdbc/activitiDS |
datasource.jndi.resourceRef | Sets whether the lookup occurs in a J2EE container, that is, whether the prefix java:comp/env/ needs to be added if the JNDI name doesn't already contain it. | true |
Example (on JBoss EAP 6.3):
datasource.jndi.name=java:jboss/datasources/activitiDS
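For example, with Tomcat the corresponding container-managed data source could be declared in conf/context.xml (the connection details below are placeholder values), together with datasource.jndi.name=jdbc/activitiDS and datasource.jndi.resourceRef=true in activiti-app.properties:

```xml
<!-- conf/context.xml: a container-managed data source (placeholder values) -->
<Resource name="jdbc/activitiDS"
          auth="Container"
          type="javax.sql.DataSource"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://127.0.0.1:3306/activiti?characterEncoding=UTF-8"
          username="alfresco"
          password="alfresco"/>
```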
Hibernate settings
The Alfresco Process Services specific logic is written using JPA 2.0 with Hibernate as implementation. Note that the Process Engine itself uses MyBatis [53] for full control of each SQL query.
Set the following properties.
Property | Description | Mandatory |
---|---|---|
hibernate.dialect | The dialect implementation that Hibernate uses. This is database specific. | Yes. It is very important to set the correct dialect, otherwise the application might not boot up. |
The following values are used to test Alfresco Process Services.
Database | Dialect |
---|---|
H2 | org.hibernate.dialect.H2Dialect |
MySQL | org.hibernate.dialect.MySQLDialect |
Oracle | org.hibernate.dialect.Oracle10gDialect |
SQL Server | org.hibernate.dialect.SQLServerDialect |
DB2 | org.hibernate.dialect.DB2Dialect |
PostgreSQL | org.hibernate.dialect.PostgreSQLDialect |
Optionally, the hibernate.show_sql property can be set to true if the SQL being executed needs to be printed to the log.
To change the display language for Alfresco Process Services, configure the appropriate language in your browser settings.
The Identity Service [54] allows you to configure user authentication between a supported LDAP provider or SAML identity provider and the Identity Service for Single Sign On (SSO) capabilities.
The Identity Service needs to be deployed [55] and configured [56] with an identity provider before being set up with other Alfresco products.
Once the Identity Service has been deployed, you will need to configure Process Services [57] to authenticate with it.
Configure the activiti-identity-service.properties file using the following properties:
Property | Description | Notes |
---|---|---|
keycloak.enabled | Enable or disable authentication via the Identity Service. | Required. |
keycloak.realm | Name of the realm configured in the Identity Service. | Required. |
keycloak.auth-server-url | Base URL of the Identity Service server. Will be in the format https://{server}:{port}/auth | Required. |
keycloak.ssl-required | Whether communication to and from the Identity Service server is over HTTPS. Possible values are all for all requests, external for external requests or none. | Important: this property needs to match the equivalent setting for Require SSL in your realm within the Identity Service administration console. |
keycloak.resource | The Client ID for the client created within your realm that points to Process Services. | Required. |
keycloak.principal-attribute | The attribute used to populate the field UserPrincipal with. If this is null it will default to sub. | Important: this property needs to be set to email to work with Process Services. |
keycloak.public-client | The adapter will not send credentials for the client to the Identity Service if this is set to true. | Optional. |
keycloak.credentials.secret | The secret key for this client if the access type is not set to public. | |
keycloak.always-refresh-token | The token will be refreshed for every request if this is set to true. | |
keycloak.autodetect-bearer-only | This should be set to true if your application serves both a web application and web services. It allows for the redirection of unauthorized users of the web application to the Identity Service sign in page, but sends an HTTP 401 to unauthenticated SOAP or REST clients. | Required. |
keycloak.token-store | The location of where the account information token is stored. Possible values are cookie or session. | Required. |
keycloak.enable-basic-auth | Whether basic authentication is supported by the adapter. If set to true then a secret must also be provided. | Optional. |
activiti.use-browser-based-logout | Sets whether signing out of Process Services calls the Identity Service logout URL. If set to true, set the Admin URL to https://{server}:{port}/activiti-app/ under the client settings in the Identity Service management console. | Optional. |
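Putting the table together, a minimal activiti-identity-service.properties sketch could look as follows (the realm name, client ID, and server URL are placeholders):

```properties
keycloak.enabled=true
keycloak.realm=alfresco
keycloak.auth-server-url=https://identity.example.com/auth
# must match the Require SSL setting of the realm
keycloak.ssl-required=external
keycloak.resource=process-services
# must be email to work with Process Services
keycloak.principal-attribute=email
keycloak.public-client=true
keycloak.autodetect-bearer-only=true
keycloak.token-store=session
activiti.use-browser-based-logout=true
```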
Prerequisites
You must ensure that you have configured LDAP (LDAP synchronization in particular). You can use Kerberos SSO in combination with LDAP authentication and also database authentication. You can use both of these as fallback scenarios in the case that the user's browser does not support Kerberos authentication.
ktpass -princ HTTP/<host>.<domain>@<REALM> -pass <password> -mapuser <domainnetbios>\http<host> -crypto all -ptype KRB5_NT_PRINCIPAL -out c:\temp\http<host>.keytab -kvno 0
setspn -a HTTP/<host> http<host>
setspn -a HTTP/<host>.<domain> http<host>
Copy the key table files created in steps 1 and 2 to the servers they were named after. Copy the files to a protected area, such as C:\etc\ or /etc.
The default location is %WINDIR%\krb5.ini, where %WINDIR% is the location of your Windows directory, for example, C:\Windows\krb5.ini. If the file does not already exist (for example, if the Kerberos libraries are not installed on the target server), you must copy these over or create them from scratch. See Kerberos Help [60] for more information on the krb5.conf file. In this example, our Windows domain controller host name is adsrv.alfresco.org.
[libdefaults]
default_realm = ALFRESCO.ORG
default_tkt_enctypes = rc4-hmac
default_tgs_enctypes = rc4-hmac
[realms]
ALFRESCO.ORG = {
kdc = adsrv.alfresco.org
admin_server = adsrv.alfresco.org
}
[domain_realm]
adsrv.alfresco.org = ALFRESCO.ORG
.adsrv.alfresco.org = ALFRESCO.ORG
The Kerberos ini file for Linux is /etc/krb5.conf.
For JBoss, open the $JBOSS_HOME/standalone/configuration/standalone.xml file.
In the <subsystem xmlns="urn:jboss:domain:security:1.2"> section, add the following:
<security-domain name="alfresco" cache-type="default">
  <authentication>
    <login-module code="com.sun.security.auth.module.Krb5LoginModule" flag="sufficient"/>
  </authentication>
</security-domain>
Add the following security-domain sections:
<security-domain name="AlfrescoHTTP" cache-type="default">
  <authentication>
    <login-module code="com.sun.security.auth.module.Krb5LoginModule" flag="required">
      <module-option name="debug" value="true"/>
      <module-option name="storeKey" value="true"/>
      <module-option name="useKeyTab" value="true"/>
      <module-option name="doNotPrompt" value="true"/>
      <module-option name="isInitiator" value="false"/>
      <module-option name="keyTab" value="C:/etc/http<host>.keytab"/>
      <module-option name="principal" value="HTTP/<host>.<domain>"/>
    </login-module>
  </authentication>
</security-domain>
For other environments, in the Java security folder (for example, C:/Alfresco/java/lib/security), create a file named java.login.config with entries as shown below.
Alfresco {
   com.sun.security.auth.module.Krb5LoginModule sufficient;
};
AlfrescoHTTP {
   com.sun.security.auth.module.Krb5LoginModule required
   storeKey=true
   useKeyTab=true
   doNotPrompt=true
   keyTab="C:/etc/http<host>.keytab"
   principal="HTTP/<host>.<domain>";
};
com.sun.net.ssl.client {
   com.sun.security.auth.module.Krb5LoginModule sufficient;
};
other {
   com.sun.security.auth.module.Krb5LoginModule sufficient;
};
login.config.url.1=file:${java.home}/lib/security/java.login.config
Property name | Description | Default value |
---|---|---|
kerberos.authentication.enabled | A switch for activating functionality for Kerberos SSO authentication. This applies to both the APS user interface and the REST API. | FALSE |
kerberos.authentication.principal | The Service Principal Name (SPN). For example, HTTP/alfresco.test.activiti.local. | None |
kerberos.authentication.keytab | The file system path to the key table file. For example, C:/alfresco/alfrescohttp.keytab. | None |
kerberos.authentication.krb5.conf | The file system path to the Kerberos configuration file on the local server. For example, C:/Windows/krb5.ini. | None |
kerberos.allow.ldap.authentication.fallback | Determines whether to allow login for unsupported client browsers using LDAP credentials. | FALSE |
kerberos.allow.database.authentication.fallback | Determines whether to allow login for unsupported client browsers using database credentials. | FALSE |
kerberos.allow.samAccountName.authentication | Authentication of the user id using the short form (for example username instead of username@domain.com). | FALSE |
security.authentication.use-externalid | A setting that enables the use of Kerberos authentication. | FALSE |
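As a sketch, enabling Kerberos SSO with LDAP fallback could then look like this in the properties file (the SPN and file paths are placeholders taken from the examples above):

```properties
kerberos.authentication.enabled=true
kerberos.authentication.principal=HTTP/alfresco.test.activiti.local
kerberos.authentication.keytab=C:/alfresco/alfrescohttp.keytab
kerberos.authentication.krb5.conf=C:/Windows/krb5.ini
# fall back to LDAP credentials for browsers without Kerberos support
kerberos.allow.ldap.authentication.fallback=true
security.authentication.use-externalid=true
```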
security.oauth2.authentication.enabled=true
security.oauth2.client.clientId=<client_id>
security.oauth2.client.clientSecret=<secret_key>
security.oauth2.client.userAuthorizationUri=https://github.com/login/oauth/authorize
security.oauth2.client.tokenName=oauth_token
security.oauth2.client.accessTokenUri=https://github.com/login/oauth/access_token
security.oauth2.client.userInfoUri=https://api.github.com/user
Property | Description |
---|---|
security.oauth2.authentication.enabled | Enables or disables the OAuth 2 client. To enable the OAuth 2 client, set this property to true. To disable it, set this property to false. |
security.oauth2.client.clientId | Client ID provided by the OAuth 2 Authorization server. |
security.oauth2.client.clientSecret | Client Secret provided by the OAuth 2 Authorization server. |
security.oauth2.client.checkToken | Configures the OAuth 2 Authorization to be used. Only set this property if you are using an internal authentication server. It contains the authorization URL obtained from the Authorization server. Example: security.oauth2.client.checkToken=http://localhost:9999/oauth/check_token |
security.oauth2.client.userAuthorizationUri | Implementation of the Authorization endpoint from the OAuth 2 specification. Accepts authorization requests, and handles user approval if the grant type is authorization code. |
security.oauth2.client.tokenName | Name of the token that will be used as parameter in the request. |
security.oauth2.client.accessTokenUri | Endpoint for token requests as described in the OAuth 2 specification. Once login access to the application on the authorisation server has been allowed, the server provides the client (APS application) with the access token. This is exchanged with the authorisation server residing on the Uri set within this property. |
security.oauth2.client.userInfoUri | Uri of the user. This is used to retrieve user details from the authorisation server. |
# CORS CONFIGURATION
cors.enabled=true
When CORS is enabled, CORS requests can be made to all endpoints under /activiti-app/api.
Some additional properties are also available to fine-tune CORS, for example to make CORS available only to certain origins, or to restrict the HTTP methods that can be used and the headers that can be sent with CORS-enabled requests.
cors.enabled=false
cors.allowed.origins=*
cors.allowed.methods=GET,POST,HEAD,OPTIONS,PUT,DELETE
cors.allowed.headers=Authorization,Content-Type,Cache-Control,X-Requested-With,accept,Origin,Access-Control-Request-Method,Access-Control-Request-Headers,X-CSRF-Token
cors.exposed.headers=Access-Control-Allow-Origin,Access-Control-Allow-Credentials
cors.support.credentials=true
cors.preflight.maxage=10
Property | Description |
---|---|
cors.allowed.origins | Specifies the hosts allowed in cross-origin requests. By default, the value is set to *, which permits clients hosted on any server to access the resources. Alternatively, you can specify a host, for example, http://www.example.org:8080 [61], which will only allow requests from that host. Multiple entries or wildcards are not allowed for this setting. In general, it is recommended to restrict cors.allowed.origins to origins within your organization. |
cors.allowed.methods | Configures which HTTP methods are permitted in cross-origin requests. |
cors.allowed.headers | Specifies the headers that can be set manually or programmatically in the request, in addition to the ones set by the user agent (for example, Connection). The default values are shown in the configuration example above. |
cors.exposed.headers | Whitelists the headers that the client can access from the server. The default values are shown in the configuration example above. |
cors.support.credentials | Determines whether HTTP cookie and HTTP authentication-based credentials are allowed. The default value is true. |
cors.preflight.maxage | Preflighted requests use the OPTIONS method to first verify the resource availability and then request it. This property determines the maximum time (in minutes) for caching a preflight request. The default value is 10. |
To disable CORS altogether, set:
cors.enabled=false
Business Calendar is used to calculate relative due dates for tasks. To exclude weekends when calculating a task’s relative due date, set the calendar.weekends property as follows:
# Weekend days comma separated (day's first 3 letters in capital)
calendar.weekends=SAT,SUN
To invalidate the login session, do the following:
security.use-http-session=true
Set this property to false if you do not wish to enable this behavior.
When the application starts for the first time, it will verify that there is at least one user in the system. If not, a user with superuser rights will be created.
The default user ID to sign in with is admin@app.activiti.com using password admin. This should be changed after signing in for the first time.
The initial user details can be modified (this must be done before the first start-up) with the following properties:
Property | Description |
---|---|
admin.email | The email address used to create the first user, which also acts as the sign in identifier. |
admin.group | Capabilities in Alfresco Process Services are managed by adding users into certain groups. The first user will have all capabilities enabled. This property defines the name of the group to which the first user will be added. By default it is Superusers. |
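For example, to bootstrap a different initial user before the first start-up (the email address below is a placeholder):

```properties
admin.email=admin@example.com
admin.group=Superusers
```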
The application sends out emails to users on various events. For example, when a task is assigned to the user.
Set the following properties to configure the email server.
Property | Description |
---|---|
email.enabled | Enables or disables the email functionality as a whole. By default, it is set to false, so make sure to set it to true when you require the email functionality. |
email.host | The host address of the email server. |
email.port | The port on which the email server is running. |
email.useCredentials | Boolean value. Indicates whether the email server needs credentials to make a connection. If so, both username and password need to be set. |
email.username | The username used as credentials when email.useCredentials is true. |
email.password | The password used as credentials when email.useCredentials is true. |
email.ssl | Defines whether SSL is needed for the connection to the email server. |
email.tls | Defines whether TLS is needed for the connection to the email server. This needs to be true when Google Mail is used as the mail server, for example. |
email.from.default | The email address that is used in the from field of any email sent. |
email.from.default.name | The name that is used in the from field of the email sent. |
email.feedback.default | Some emails have a feedback email address that people can use to send feedback. This property defines that address. |
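A typical configuration for a TLS-enabled SMTP server might look like the following sketch (the host, addresses, and credentials are placeholders):

```properties
email.enabled=true
email.host=smtp.example.com
email.port=587
email.useCredentials=true
email.username=aps-mailer
email.password=secret
email.tls=true
email.from.default=no-reply@example.com
email.from.default.name=Process Services
email.feedback.default=feedback@example.com
```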
Emails are created by a template engine. The emails can contain various links to the runtime system to bring the user straight to the correct page in the web application.
Set the following property to correct the links. The example in the following table uses 'localhost' as host address and 'activiti-app' as the context root:
Property | Example |
---|---|
email.base.url | http://localhost:8080/activiti-app |
Elasticsearch is used in Alfresco Process Services as a data store for generating analytics and reports. Elasticsearch [63] is an open source data store for JSON [64] documents. Its main features include fast full text search and analytics.
Alfresco Process Services uses a REST connection to communicate with a remote instance of Elasticsearch. The application creates a Java Low Level REST client, which allows you to configure Process Services to index event data into a remote Elasticsearch service. The REST client internally uses the Apache HTTP Async Client to send HTTP requests. This allows communication with an Elasticsearch cluster through HTTP.
A REST connection between Elasticsearch and Alfresco Process Services has three points to be aware of:
For more details regarding the REST client, see Java Low Level REST Client [65].
If migrating from an embedded Elasticsearch instance, see rebuilding Elasticsearch instances [66] after configuring a connection to an external Elasticsearch instance via REST.
For information about the compatibility between the REST client and the remote Elasticsearch cluster environment, see Communicating with an Elasticsearch Cluster using HTTP [67].
The following properties need to be configured in activiti-app.properties for Elasticsearch:
Property | Description | Example value |
---|---|---|
elastic-search.server.type | The server type for Elasticsearch configuration. Set this to rest to enable the REST client implementation. | rest |
elastic-search.rest-client.port | The port running Elasticsearch. | 9200 |
elastic-search.rest-client.connect-timeout | Connection timeout for the REST client. | 1000 |
elastic-search.rest-client.socket-timeout | Socket timeout for the REST client. | 5000 |
elastic-search.rest-client.address | IP address of the REST client. | localhost |
elastic-search.rest-client.schema | Sets whether the connection uses http or https. | http |
elastic-search.rest-client.auth.enabled | Sets whether authentication is enabled for the REST connection. | false |
elastic-search.rest-client.username | The username of the Elasticsearch user. | admin |
elastic-search.rest-client.password | The password for the Elasticsearch user. | esadmin |
elastic-search.rest-client.keystore | The keystore used to encrypt the connection to the Elasticsearch instance. | |
elastic-search.rest-client.keystore.type | The type of keystore used for encryption. | jks |
elastic-search.rest-client.keystore.password | The password of keystore used for encryption. | |
elastic-search.default.index.name | The default prefix for the default tenant. | activiti |
elastic-search.tenant.index.prefix | The prefix used for indexing in multi-tenant setups. | activiti-tenant- |
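Combining these properties, a sketch of a connection to a remote Elasticsearch instance over plain HTTP (the address and port are placeholders) would be:

```properties
elastic-search.server.type=rest
elastic-search.rest-client.address=localhost
elastic-search.rest-client.port=9200
elastic-search.rest-client.schema=http
elastic-search.rest-client.connect-timeout=1000
elastic-search.rest-client.socket-timeout=5000
elastic-search.rest-client.auth.enabled=false
elastic-search.default.index.name=activiti
elastic-search.tenant.index.prefix=activiti-tenant-
```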
Backing up the data stored in Elasticsearch is described in detail in the Elasticsearch documentation [71]. When using the snapshot functionality of Elasticsearch, you must enable the HTTP interface and create firewall rules to prevent the general public from accessing it.
The event processing is closely related to the Elasticsearch configuration [72].
The main concept is depicted in the following diagram.
The event processor is architected to work without collisions in a multi-node clustered setup. Each of the event processors will first try to lock events before processing them. If a node goes down during event processing (after locking), an expired events processor component will pick them up and process them as regular events.
Event processing can be configured; however, the default values cater for typical scenarios.
Property | Description | Default |
---|---|---|
event.generation.enabled | Set to false if no events need to be generated. Do note that the reporting/analytics event data is then lost forever. | true |
event.processing.enabled | Set to false to disable event processing. This can be useful in a clustered setup where only some nodes do the processing. | true |
event.processing.blocksize | The number of events that are attempted to be locked and fetched to be processed in one transaction. Larger values equate to more memory usage, but less database traffic. | 100 |
event.processing.cronExpression | The cron expression that defines how often the events generated by the Process Engine are processed (that is, read from the database and fed into Elasticsearch). By default, every 30 seconds. If events do not need to appear quickly in the analytics, it is advised to make this less frequent to put less load on the database. | 0/30 * * * * ? |
event.processing.expired.cronExpression | The cron expression that defines how often expired events are processed. These are events that were locked, but never processed (such as when the node processing them went down). | 0 0/30 * * * ? |
event.processing.max.locktime | The maximum time an event can be locked before it is seen as expired. After that it can be taken by another processor. Expressed in milliseconds. | 600000 |
event.processing.processed.events.action | To keep the database table where the Process Engine writes the events small and efficient, processed events are either moved to another table or deleted. Possible values are move and delete. Move is the safe option, as it allows for reconstructing the Elasticsearch index if the index were to get corrupted for some reason. | move |
event.processing.processed.action.cronExpression | The cron expression that defines how often the action above happens. | 0 25/45 * * * ? |
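For example, in a clustered setup where event processing is reserved for dedicated nodes, the remaining nodes could be configured as follows (all nodes keep generating events so no analytics data is lost):

```properties
event.generation.enabled=true
event.processing.enabled=false
```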
Occasionally, an Elasticsearch index can get corrupted and become unusable. All data sent to Elasticsearch is stored in the relational database (unless the property event.processing.processed.events.action has been set to delete, in which case the data is lost).
You might have to rebuild the indexes when changing the core Elasticsearch settings (for example, number of shards).
Events are stored in the ACT_EVT_LOG table before they are processed. The IS_PROCESSED_ flag is set to 0 when an event is inserted and changed to 1 when the event is processed for Elasticsearch. An asynchronous component then moves the table rows with the flag set to 1 to the PROCESSED_ACTIVITI_EVENTS table.
Therefore, to rebuild the Elasticsearch index, you must do the following:
Remove the data from Elasticsearch (for example, by deleting the data folders when running in embedded mode)
Copy the rows from PROCESSED_ACTIVITI_EVENTS back to ACT_EVT_LOG, setting the IS_PROCESSED_ flag to 0 again.
Note also that, for historical reasons, the DATA_ column has different types in ACT_EVT_LOG (byte array) and PROCESSED_ACTIVITI_EVENTS (long text), so a data type conversion is needed when moving rows between those tables.
See the example-apps folder that comes with Alfresco Process Services. It has an event-backup-example folder containing a Maven project that carries out the data type conversion. You can also use this to back up and restore events. Note that this example uses Java, but it can also be done with other languages. It first writes the content of PROCESSED_ACTIVITI_EVENTS to a .csv file. This is also useful when that table becomes too big: store the data in a file and remove the rows from the database table.
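The core of the data type conversion can be sketched as follows. This is a minimal illustration, not the Maven example app: it uses an in-memory SQLite database as a stand-in for the real tables, and any column names beyond those mentioned above are assumptions.

```python
import sqlite3

# Stand-in schema: DATA_ is long text in PROCESSED_ACTIVITI_EVENTS
# and a byte array (BLOB) in ACT_EVT_LOG.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE PROCESSED_ACTIVITI_EVENTS (ID_ TEXT, DATA_ TEXT)")
conn.execute("CREATE TABLE ACT_EVT_LOG (ID_ TEXT, DATA_ BLOB, IS_PROCESSED_ INTEGER)")
conn.execute("INSERT INTO PROCESSED_ACTIVITI_EVENTS VALUES ('1', ?)",
             ('{"type": "TASK_CREATED"}',))

# Move rows back, converting the long-text payload to bytes
# and resetting the processed flag to 0.
rows = conn.execute("SELECT ID_, DATA_ FROM PROCESSED_ACTIVITI_EVENTS").fetchall()
for id_, data in rows:
    conn.execute(
        "INSERT INTO ACT_EVT_LOG (ID_, DATA_, IS_PROCESSED_) VALUES (?, ?, 0)",
        (id_, data.encode("utf-8")),
    )

row = conn.execute("SELECT DATA_, IS_PROCESSED_ FROM ACT_EVT_LOG").fetchone()
print(row)
```

Against a real Process Services database you would point this at your JDBC/DB-API connection instead of SQLite, and also clear the Elasticsearch data before restarting.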
It is possible to configure whether users get access to the model editors (the App Designer application) and the analytics application.
Access to the default applications is configured through capabilities. In the admin UI, it is possible to create system groups. These groups have a set of capabilities, and all users who are part of such a group have those capabilities.
The following settings configure app access when a new user is created in the system (manually or through LDAP sync). To enable access, set the property app.[APP-NAME].default.enabled to true. If true, a newly created user will be given access to this app.
Access is granted by adding the user to a group with a capability that enables the app. The name of that group can be configured using the app.[APP-NAME].default.capabilities.group property. If this property is set, and app.[APP-NAME].default.enabled is set to true, the user is added to the group with this name, which provides access to the app. If the group does not exist, it is created. If the property is commented out while app.[APP-NAME].default.enabled is set to true, a default group name is used.
Currently possible app names: { analytics | kickstart }
Property | Default |
---|---|
app.analytics.default.enabled | true |
app.analytics.default.capabilities.group | analytics-users |
app.kickstart.default.enabled | true |
app.kickstart.default.capabilities.group | kickstart-users |
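For example, to keep the App Designer available to new users while disabling default analytics access, the fragment below could be added to activiti-app.properties (a sketch based on the properties above):

```properties
app.analytics.default.enabled=false
app.kickstart.default.enabled=true
app.kickstart.default.capabilities.group=kickstart-users
```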
The following setting, if set to true, will create a default example app with some simple review and approve processes for every newly created user.
Property | Default |
---|---|
app.review-workflows.enabled | false |
When a task is created that has one or more candidate groups assigned, the group managers for those groups will be automatically involved with the created task. To stop group managers from being involved, set the following property to false.
Property | Default |
---|---|
app.runtime.groupTasks.involveGroupManager.enabled | true |
The Process Engine operates in a stateless way. However, there is data that will never change, which makes it a prime candidate for caching.
A process definition is an example of such static data. When you deploy a BPMN 2.0 XML file to the Process Engine, the engine parses it into something it can execute, and stores the XML and some data, such as the description and business key, in the database. Such a process definition will never change: once it’s in the database, the stored data remains the same until the process definition is deleted.
On top of that, parsing a BPMN 2.0 XML to something executable is quite a costly operation compared with other engine operations. This is why the Process Engine internally uses a process definition cache to store the parsed version of the BPMN 2.0 XML.
In a multi-node setup, each node will have a cache of process definitions. When a node goes down and comes back up, it will rebuild the cache as it handles process instances, tasks, and so on.
The process definition cache size can be set by the following property:
Property | Description | Default |
---|---|---|
activiti.process-definitions.cache.max | The number of process definitions kept in memory. When the system needs to cope with many process definitions concurrently, it is advised to make this value higher than the default. | 128 |
Alfresco Process Services enables you to upload content, such as attaching a file to a task or a form.
Content can be stored locally by setting the property below to fs. Alternatively, you can use Amazon S3 for content storage by setting it to s3.
contentstorage.type
To configure file system for content storage, set the following properties in the activiti-app.properties file:
Property | Description | Example |
contentstorage.fs.rootFolder | Name and location of the root folder. Important: When using multiple instances of the application, make sure that this path references a shared network drive. This is so that all nodes are able to access all content as the application is stateless and any server can handle any request. | /data |
contentstorage.fs.createRoot | Sets whether the root folder is created by default. | true |
contentstorage.fs.depth | Depth of the folder tree. | 4 |
contentstorage.fs.blockSize | Maximum number of files in a single folder. | 1024 |
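A minimal local file system configuration in activiti-app.properties could then look as follows (the /data path is an example; in a multi-node setup it must point to a shared network drive):

```properties
contentstorage.type=fs
contentstorage.fs.rootFolder=/data
contentstorage.fs.createRoot=true
contentstorage.fs.depth=4
contentstorage.fs.blockSize=1024
```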
To configure Amazon S3 for content storage, set the following properties in the activiti-app.properties file:
Property | Description |
contentstorage.s3.accessKey | Set to the S3 access key. The access key is required to identify the Amazon Web Services account and can be obtained from the Amazon Web Services site AWS Credentials [73]. |
contentstorage.s3.secretKey | Set to the S3 secret key.The secret key is required to identify the Amazon Web Services account and can be obtained from the Amazon Web Services site AWS Credentials [73]. |
contentstorage.s3.bucketName | Set to the S3 bucket name.The bucket name must be unique among all Amazon Web Services users globally. If the bucket does not already exist, it will be created, but the name must not have already been taken by another user. See S3 bucket restrictions [74] for more information on bucket naming. |
contentstorage.s3.objectKeyPrefix | Set to your AWS object prefix. |
Alfresco Content Services is also a supported storage mechanism; you can find more information in Connecting to external content systems [75].
The Microsoft Office integration (opening an Office document directly from the browser) doesn’t need any specific configuration. However, the protocol used for the integration mandates HTTPS by default. This means that Alfresco Process Services must run on a server with HTTPS enabled and its certificates correctly configured.
If this is not possible for some reason, you must change a client-side setting on each user's machine to make this feature work.
For Windows, see:
http://support.microsoft.com/kb/2123563 [76]
For OS X, execute following terminal command:
defaults -currentHost write com.microsoft.registrationDB hkey_current_user\\hkey_local_machine\\software\\microsoft\\office\\14.0\\common\\internet\\basicauthlevel -int 2
Note that this is not a recommended approach from a security point of view.
The application uses SLF4J bound to Log4j. The log4j.properties configuration file can be found in the WEB-INF/classes folder of the WAR file.
See SLF4J [77] and Log4j [78] for more information.
For all REST API endpoints available in the application, metrics are gathered about run-time performance. These statistics can be written to the log.
Property | Description | Default |
---|---|---|
metrics.console.reporter.enabled | Boolean value. If true, the REST API endpoint statistics will be logged. | false |
metrics.console.reporter.interval | The interval of logging, in seconds. Note that these logs are quite large, so avoid setting this too low. | 60 |
Note that the statistics are based on the run-time timings since the last start up. When the server goes down, the metrics are lost.
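For example, to log the statistics every five minutes (the interval is an arbitrary example):

```properties
metrics.console.reporter.enabled=true
metrics.console.reporter.interval=300
```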
Example output for one REST API endpoint:
com.activiti.runtime.rest.TaskQueryResource.listTasks
             count = 4
         mean rate = 0.03 calls/second
     1-minute rate = 0.03 calls/second
     5-minute rate = 0.01 calls/second
    15-minute rate = 0.00 calls/second
               min = 5.28 milliseconds
               max = 186.55 milliseconds
              mean = 50.74 milliseconds
            stddev = 90.54 milliseconds
            median = 5.57 milliseconds
              75% <= 141.34 milliseconds
              95% <= 186.55 milliseconds
              98% <= 186.55 milliseconds
              99% <= 186.55 milliseconds
            99.9% <= 186.55 milliseconds
Alfresco Process Services provides REST API operations that allow you to query tasks, process instances, historic tasks, and historic process instances. You can also request that task and process variables be included by using the parameters includeTaskLocalVariables and includeProcessVariables and setting their values to true. When executing REST API calls that include these variables, the result sets can be quite large, and you may want to limit or control the list size returned in the response. The following table shows the properties you can set in the activiti-app.properties file to configure this.
Property name | Description |
---|---|
query.task.limit | Limits the number of tasks returned from the query GET /runtime/tasks. |
query.execution.limit | Limits the number of process instances returned from the query GET /runtime/process-instances. |
query.historic.task.limit | Limits the number of historic tasks returned from the query POST /enterprise/historic-tasks/query. |
query.historic.process.limit | Limits the number of historic process instances returned from the query POST /enterprise/historic-process-instances/query. |
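For instance, to cap all four result sets in activiti-app.properties (the limit values below are arbitrary examples, not defaults):

```properties
query.task.limit=500
query.execution.limit=500
query.historic.task.limit=1000
query.historic.process.limit=1000
```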
It’s possible to hook up a centralized user data store to Alfresco Process Services. Any server supporting the LDAP protocol can be used. Special configuration options and logic have been included to work with Active Directory (AD) systems too.
From a high-level overview, the external Identity Management (IDM) integration works as follows:
Periodically, all user and group information is synchronized asynchronously. This means that all data for users (name, email address, group membership and so on) is copied to the Alfresco Process Services database. This is done to improve performance and to efficiently store more user data that doesn’t belong to the IDM system.
If the user logs in to Alfresco Process Services, the authentication request is passed to the IDM system. On successful authentication there, the user data corresponding to that user is fetched from the Alfresco Process Services database and used for the various requests. Note that no passwords are saved in the database when using an external IDM.
Note that the LDAP sync only needs to be activated and configured on one node in the cluster. It works when activated on multiple nodes, but this will of course lead to higher traffic for both the LDAP system and the database.
The configuration of the external IDM authentication/synchronization is done in the same way as the regular properties. There is a properties file named activiti-ldap.properties in the WEB-INF/classes/META-INF/ folder in the WAR file. The values in a file with the same name on the classpath have precedence over the default values in the former file.
In addition, in the same folder, the example-activiti-ldap-for-ad.properties file contains an example configuration for an Active Directory system.
The following code snippet shows the properties involved in configuring a connection to an LDAP server (Active Directory is similar). These are the typical parameters used when connecting with an LDAP server. Advanced parameters are commented out in the example below:
# The URL to connect to the LDAP server
ldap.authentication.java.naming.provider.url=ldap://localhost:10389
# The default principal to use (only used for LDAP sync)
ldap.synchronization.java.naming.security.principal=uid=admin,ou=system
# The password for the default principal (only used for LDAP sync)
ldap.synchronization.java.naming.security.credentials=secret
# The authentication mechanism to use for synchronization
#ldap.synchronization.java.naming.security.authentication=simple
# LDAPS truststore configuration properties
#ldap.authentication.truststore.path=
#ldap.authentication.truststore.passphrase=
#ldap.authentication.truststore.type=
# Set to 'ssl' to enable truststore configuration via subsystem's properties
#ldap.authentication.java.naming.security.protocol=ssl
# The LDAP context factory to use
#ldap.authentication.java.naming.factory.initial=com.sun.jndi.ldap.LdapCtxFactory
# Request timeout, in milliseconds; use 0 for none (default)
#ldap.authentication.java.naming.read.timeout=0
# See http://docs.oracle.com/javase/jndi/tutorial/ldap/referral/jndi.html
#ldap.synchronization.java.naming.referral=follow
It is possible to configure connection pooling for the LDAP/AD connections. This is an advanced feature and is only needed when creating connections to the IDM system has a noticeable impact on system performance.
The connection pooling is implemented using the Spring LDAP framework. Below are all the properties that can be configured. They follow the semantics of the corresponding Spring LDAP properties and are described here [83].
# -----------------------
# LDAP CONNECTION POOLING
# -----------------------
# Options=
# nothing filled in: no connection pooling
# 'jdk': use the default jdk pooling mechanism
# 'spring': use the spring ldap connection pooling facilities. These can be configured further below
#ldap.synchronization.pooling.type=spring
# Following settings follow the semantics of org.springframework.ldap.pool.factory.PoolingContextSource
#ldap.synchronization.pooling.minIdle=0
#ldap.synchronization.pooling.maxIdle=8
#ldap.synchronization.pooling.maxActive=0
#ldap.synchronization.pooling.maxTotal=-1
#ldap.synchronization.pooling.maxWait=-1
# Options for exhausted action: fail | block | grow
#ldap.synchronization.pooling.whenExhaustedAction=block
#ldap.synchronization.pooling.testOnBorrow=false
#ldap.synchronization.pooling.testOnReturn=false
#ldap.synchronization.pooling.testWhileIdle=false
#ldap.synchronization.pooling.timeBetweenEvictionRunsMillis=-1
#ldap.synchronization.pooling.minEvictableIdleTimeMillis=1800000
#ldap.synchronization.pooling.numTestsPerEvictionRun=3
# Connection pool validation (see http://docs.spring.io/spring-ldap/docs/2.0.2.RELEASE/reference/#pooling for semantics)
# Used when any of the testXXX above are set to true
#ldap.synchronization.pooling.validation.base=
#ldap.synchronization.pooling.validation.filter=
# Search control: object, oneLevel, subTree
#ldap.synchronization.pooling.validation.searchControlsRefs=
To enable authentication via LDAP or AD, set the following property:
ldap.authentication.enabled=true
In some organizations, a case-insensitive login with the LDAP ID is allowed. By default, this is disabled. To enable case-insensitive login, set the following property to false.
ldap.authentication.casesensitive=false
Next, a property ldap.authentication.dnPattern can be set:
ldap.authentication.dnPattern=uid={0},ou=users,dc=alfresco,dc=com
However, if the users are in structured folders (organizational units, for example), a direct pattern cannot be used. In this case, leave the property empty or comment it out. A query will then be performed using the ldap.synchronization.personQuery (see below) with the ldap.synchronization.userIdAttributeName to find the user and their distinguished name (DN). That DN is then used to sign in.
When using Active Directory, two additional properties need to be set:
ldap.authentication.active-directory.enabled=true
ldap.authentication.active-directory.domain=alfresco.com
The first property enables Active Directory support and the second property is the domain of the user ID (that is, userId@domain) to sign in using Active Directory.
If the domain does not match the rootDn, it is possible to set it explicitly:
ldap.authentication.active-directory.rootDn=DC=somethingElse,DC=com
And also the filter that is used (which defaults to a userPrincipalName comparison) can be changed:
ldap.authentication.active-directory.searchFilter=(&(objectClass=user)(userPrincipalName={0}))
To allow falling back to database authentication when LDAP authentication is enabled (for example, for users that exist only in the Process Services database), set the following property:
ldap.allow.database.authentication.fallback=true
The synchronization component will periodically query the IDM system and change the user and group database. There are two synchronization modes: full and differential.
Full synchronization queries all data from the IDM and checks every user, group, and membership to be valid. The resource usage is heavier than the differential synchronization in this type of synchronization and therefore, it is usually only triggered on the very first sync when Alfresco Process Services starts up and is configured to use an external IDM. This is so that all users and groups are available in the database.
To enable full synchronization, and to set the frequency at which it runs using a cron expression:
ldap.synchronization.full.enabled=true
ldap.synchronization.full.cronExpression=0 0 0 * * ?
Differential synchronization is lighter, in terms of performance, as it only queries the users and groups that have changed since the last synchronization. One downside is that it cannot detect deletions of users and groups. Consequently, a full synchronization needs to run periodically (though typically less frequently than the differential synchronization) to account for these deletions.
ldap.synchronization.differential.enabled=true
ldap.synchronization.differential.cronExpression=0 0 */4 * * ?
Do note that all synchronization results are logged, both in the regular logging and in a database table named IDM_SYNC_LOG.
The synchronization logic builds on two elements:
Queries that return the correct user/group/membership data
A mapping of LDAP attributes to attributes used within the Alfresco Process Services system
There are a lot of properties to configure, so base your configuration on one of the two files in the META-INF folder, as these contain default values. You only need to add a property to your custom configuration file if its default value is not appropriate.
These are settings that are generic or shared between user and group objects. For each property, an example setting for a regular LDAP system (for example, ApacheDS) and for Active Directory is shown.
Property | Description | LDAP Example | Active Directory Example |
---|---|---|---|
ldap.synchronization.distinguishedNameAttributeName | The attribute that is the distinguished name in the system. | dn | dn |
ldap.synchronization.modifyTimestampAttributeName | The name of the operational attribute recording the last update time for a group or user. Important for the differential query. | modifyTimestamp | whenChanged |
ldap.synchronization.createTimestampAttributeName | The name of the operational attribute recording the create time for a group or user. Important for the differential query. | createTimestamp | whenCreated |
ldap.synchronization.timestampFormat | The timestamp format. This is specific to the directory server and can vary. | yyyyMMddHHmmss.SSS'Z' | yyyyMMddHHmmss'.0Z' |
ldap.synchronization.timestampFormat.locale.language | The timestamp format locale language for parsing. Follows the java.util.Locale semantics. | en | en |
ldap.synchronization.timestampFormat.locale.country | The timestamp format locale country. Follows the java.util.Locale semantics. | GB | GB |
ldap.synchronization.timestampFormat.timezone | The timestamp format timezone. Follows the java.text.SimpleDateFormat semantics. | GMT | GMT |
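The timestamp patterns above use Java SimpleDateFormat syntax. As an illustration only (the sample value below is made up), the Active Directory format yyyyMMddHHmmss'.0Z' corresponds to a parse like this in Python:

```python
from datetime import datetime, timezone

# Example AD 'whenChanged' value (an assumed sample, not real directory data).
raw = "20240115093000.0Z"

# Java pattern yyyyMMddHHmmss'.0Z' treats '.0Z' as a literal suffix;
# the strptime equivalent spells that suffix out directly.
parsed = datetime.strptime(raw, "%Y%m%d%H%M%S.0Z").replace(tzinfo=timezone.utc)
print(parsed.isoformat())
```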
Property | Description | LDAP Example | Active Directory Example |
---|---|---|---|
ldap.synchronization.users.ignoreCase | If this property is set to true, the synchronization ignores the case in which users are stored in the source directory when syncing users. | | |
ldap.synchronization.userSearchBase | The user search base restricts the LDAP user query to a sub section of a tree on the LDAP server. | ou=users,dc=alfresco,dc=com | ou=users,dc=alfresco,dc=com |
ldap.synchronization.syncAdditionalUsers | Set to true if users outside of the userSearchBase but included in the groupSearchBase should be synchronized. | false | false |
ldap.synchronization.personQuery | The query to select all objects that represent the users to import (used in the full synchronization query). | (objectclass\=inetOrgPerson) | (&(objectclass\=user)(userAccountControl\:1.2.840.113556.1.4.803\:\=512)) |
ldap.synchronization.personDifferentialQuery | The query to select objects that represent the users to import that have changed since a certain time (used in the differential synchronization query). | ||
ldap.synchronization.userIdAttributeName | The attribute name on people objects found in LDAP to use as the user ID in Alfresco | uid | cn |
ldap.synchronization.userFirstNameAttributeName | The attribute on person objects in LDAP to map to the first name property of a user | givenName | givenName |
ldap.synchronization.userLastNameAttributeName | The attribute on person objects in LDAP to map to the last name property of a user | sn | cn |
ldap.synchronization.userEmailAttributeName | The attribute on person objects in LDAP to map to the email property of a user | ||
ldap.synchronization.userType | The person type in the directory server. | inetOrgPerson | user |
You can configure which users should be made administrators in the system. Delimit multiple entries with a semicolon (;), as commas can’t be used.
ldap.synchronization.tenantAdminDn=uid=joram,ou=users,dc=alfresco,dc=com;uid=tijs,ou=users,dc=alfresco,dc=com
When using multi-tenancy, the administrator of all tenants can be configured as follows. Similar delimiting rules apply as above.
ldap.synchronization.tenantManagerDn=uid=joram,ou=users,dc=alfresco,dc=com
It’s important to set at least one user with admin rights. Otherwise, no user will be able to sign into the system and administer it.
Property | Description | LDAP Example | Active Directory Example |
---|---|---|---|
ldap.synchronization.groupSearchBase | The group search base restricts the LDAP group query to a sub section of a tree on the LDAP server. | ou=groups,dc=alfresco,dc=com | ou=groups,dc=alfresco,dc=com |
ldap.synchronization.groupQuery | The query to select all objects that represent the groups to import (used in the full synchronization). | (objectclass\=groupOfNames) | (objectclass\=group) |
ldap.synchronization.groupDifferentialQuery | The query to select objects that represent the groups to import that have changed since a certain time (used in the differential synchronization). | | |
ldap.synchronization.groupIdAttributeName | The attribute on LDAP group objects to map to the authority name property in Alfresco Process Services. | cn | cn |
ldap.synchronization.groupMemberAttributeName | The attribute on LDAP group objects that defines the DN for its members. This is an important setting, as it defines the group memberships of users and the parent-child relations between groups. | member | member |
ldap.synchronization.groupType | The group type in LDAP. | groupOfNames | group |
Process Services provides the capability to configure the number of group members retrieved per query subject to the limitations imposed by Active Directory. Follow these steps to enable this:
ldap.synchronization.groupMemberRangeEnabled=true
ldap.synchronization.groupMemberRangeSize=1500
It is possible to use paging when connecting to an LDAP server (some servers even mandate this).
To enable paging when fetching users or groups, set the following properties:
ldap.synchronization.paging.enabled=true
ldap.synchronization.paging.size=500
By default, paging is disabled.
It is possible to tweak the batch sizes used when doing an LDAP sync.
The insert batch size limits the amount of data inserted in one transaction (for example, 100 users per transaction). By default, this is 5. The query batch size is used when data is fetched from the Alfresco Process Services database (for example, fetching users to check for deletions when doing a full sync).
ldap.synchronization.db.insert.batch.size=100
ldap.synchronization.db.query.batch.size=100
You can connect Process Services to external content systems and publish content as part of a process. With Alfresco Content Services it is also possible to retrieve and update content, as well as invoke certain repository actions.
Process Services can connect to the following content systems:
It is also possible to retrieve and update content properties in an Alfresco Content Services repository as well as invoking content actions as part of a process using the following BPMN elements:
There are three ways to configure a connection to Alfresco Content Services:
Prerequisites:
Configuring a repository:
Setting | Description |
---|---|
Name | A name for the repository connection. |
Alfresco tenant | The tenant to create the repository under. |
Repository base URL | The base URL of the repository instance to connect to. |
Share base URL | The base URL of Share for the repository instance to connect to. |
Alfresco version | The version of Alfresco Content Services to connect to. This must be version 6.1.1 or later to use SSO. |
Authentication type | The authentication type of the connection. Select Identity Service authentication to use SSO. |
User authorization:
After a repository connection has been configured to use SSO, users need to authorize their Alfresco Content Services credentials for use by Process Services by doing the following:
Token expiry:
If a user's authorization token expires while they have Alfresco Content Services tasks assigned to them, those tasks will stay in a pending state until the user reauthorizes against the repository.
Property | Description | Example |
---|---|---|
alfresco.content.sso.enabled | Sets whether SSO is enabled between Process Services and Alfresco Content Services. | ${keycloak.enabled} |
alfresco.content.sso.client_id | The Client ID within the realm that points to Process Services | ${keycloak.resource} |
alfresco.content.sso.client_secret | The secret key for the Process Services client. | ${keycloak.credentials.secret} |
alfresco.content.sso.realm | The realm that is configured for the Alfresco Content Services and Process Services clients. | ${keycloak.realm} |
alfresco.content.sso.scope | Sets the duration that tokens are valid for. For example, using the value offline_access, a token remains valid even after a user logs out, as long as the token is used at least once every 30 days. See the Keycloak documentation [101] for further information. | offline_access |
alfresco.content.sso.javascript_origins | The base URL for the Javascript origins of the Process Services instance. | http://localhost:9999 |
alfresco.content.sso.auth_uri | The authorization URL. | ${keycloak-auth-server-url}/realms/${alfresco.content.sso.realm}/protocol/openid-connect/auth |
alfresco.content.sso.token_uri | The authorization token URL. | ${keycloak-auth-server-url}/realms/${alfresco.content.sso.realm}/protocol/openid-connect/token |
alfresco.content.sso.redirect_uri | The redirect URI for authorization. The value in the example column needs to be updated with the correct base URL for the Process Services instance. | http://localhost:9999/activiti-app/rest/integration/sso/confirm-auth-request |
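Taken together, an activiti-app.properties fragment for SSO might look as follows (the ${...} placeholders and localhost URLs come from the example column above and must be replaced with values from your own Keycloak setup):

```properties
alfresco.content.sso.enabled=true
alfresco.content.sso.client_id=${keycloak.resource}
alfresco.content.sso.client_secret=${keycloak.credentials.secret}
alfresco.content.sso.realm=${keycloak.realm}
alfresco.content.sso.scope=offline_access
alfresco.content.sso.javascript_origins=http://localhost:9999
alfresco.content.sso.auth_uri=${keycloak-auth-server-url}/realms/${alfresco.content.sso.realm}/protocol/openid-connect/auth
alfresco.content.sso.token_uri=${keycloak-auth-server-url}/realms/${alfresco.content.sso.realm}/protocol/openid-connect/token
alfresco.content.sso.redirect_uri=http://localhost:9999/activiti-app/rest/integration/sso/confirm-auth-request
```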
Configuring a repository:
Properties:
The following properties need to be set in activiti-app.properties to encrypt Alfresco Content Services user credentials:
Property | Description |
---|---|
security.encryption.ivspec | A 128-bit initialization vector to encrypt credentials using AES/CBC/PKCS5PADDING. This will be 16 characters long. |
security.encryption.secret | A 128-bit secret key to encrypt credentials using AES/CBC/PKCS5PADDING. This will be 16 characters long. |
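Any cryptographically random 16-character strings will do for these two properties. As a sketch, suitable values could be generated like this:

```python
import secrets
import string

# 16 characters drawn from a 62-character alphabet gives ~95 bits of
# entropy; the string length (16 bytes) is what AES-128 requires here.
alphabet = string.ascii_letters + string.digits
ivspec = "".join(secrets.choice(alphabet) for _ in range(16))
secret = "".join(secrets.choice(alphabet) for _ in range(16))
print(ivspec, secret)
```

Paste the generated values into security.encryption.ivspec and security.encryption.secret, and keep them identical across all nodes of a multi-node installation.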
Configuring a repository:
Setting | Description |
---|---|
Name | A name for the repository connection. |
Alfresco tenant | The tenant to create the repository under. |
Repository base URL | The base URL of the repository instance to connect to. |
Share base URL | The base URL of Share for the repository instance to connect to. |
Alfresco version | The version of Alfresco Content Services to connect to. |
Authentication type | The authentication type of the connection. Select Default authentication to use basic authentication. |
User authorization:
After a repository connection has been configured for basic authentication, users need to enter their Alfresco Content Services credentials for use by Process Services by doing the following:
A Box developer account [102] is required to set up a connection to Box.
The following properties need to be set in the activiti-app.properties file to enable Box connections to be used in Process Services:
Property | Description | Example |
---|---|---|
box.disabled | Set this to false to enable Box connections to be configured in forms and processes (set it to true to disable them). | false |
box.web.auth_uri | Set this to the value provided in the example column to configure the Box authentication URI. | https://app.box.com/api/oauth2/authorize |
box.web.token_uri | Set this to the value provided in the example column to configure the Box token URI. | https://app.box.com/api/oauth2/token |
box.web.redirect_uris | Update the base of the URL provided in the example column to reflect your Process Services installation. | http://localhost:8080/activiti-app/app/rest/integration/box/confirm-auth-request |
box.web.javascript_origins | Sets the base URL of Javascript origins. | http://localhost:8080/activiti-app |
box.web.client_id | The client ID obtained from your Box developer account. | |
box.web.client_secret | The client secret obtained from your Box developer account. |
A Google developer account [103] is required to set up a connection to Google Drive.
The following properties need to be set in the activiti-app.properties file to enable Google Drive connections to be used in Process Services:
Property | Description | Example |
---|---|---|
googledrive.web.disabled | Set this to false to enable Google Drive connections to be configured in forms and processes (set it to true to disable them). | false |
googledrive.web.auth_uri | Set this to the value provided in the example column to configure the Google Drive authentication URI. | https://accounts.google.com/o/oauth2/auth |
googledrive.web.token_uri | Set this to the value provided in the example column to configure the Google Drive token URI. | https://accounts.google.com/o/oauth2/token |
googledrive.web.auth_provider_x509_cert_url | Set this to the value provided in the example column to configure the Google Drive x509 certificate URL. | https://www.googleapis.com/oauth2/v1/certs |
googledrive.web.redirect_uris | Update the base of the URL provided in the example column to reflect your Process Services installation. | http://localhost:8080/activiti-app/app/rest/integration/google-drive/confirm-auth-request |
googledrive.web.javascript_origins | Sets the base URL of Javascript origins. | http://localhost:8080/activiti-app |
googledrive.web.client_id | The client ID obtained from your Google developer account. | |
googledrive.web.client_secret | The client secret obtained from your Google developer account. | |
googledrive.web.client_email | The client email associated with your Google developer account. | |
googledrive.web.client_x509_cert_url | The client x509 certificate URL obtained from your Google developer account. |
By default, Alfresco Process Services is configured in a way that process modelers have access to all powerful features of the Process Engine. In many organizations this is not a problem, as the people who are modeling are trusted IT people or business analysts.
However, some organizations may expose the modeling tools of Alfresco Process Services directly to all end users giving them access to the full array of its capabilities. In such a scenario, some users may gather sensitive data or swamp the resources of the servers. Therefore, various validators are introduced that can be enabled or disabled, when required. These validators are run before a process model is deployed to the engine and will block deployment in case of a validation error.
The following validators disable the usage of certain tasks. The various validators are configured through the regular Alfresco Process Services properties. The default value for these validators is false. Set a property to true to enable the corresponding validator.
Disables the usage of the timer, signal, message or error start event in a process definition.
Disables the usage of the script task in a process definition. Disabling script tasks is typically something you’ll want to do when exposing the modeling tools to end users. Scripts, contrary to service tasks, don’t need any class on the classpath to be executed. As such, scripts make it very easy to execute code with bad intentions.
Disables the usage of the service task in a process definition. Service tasks are used to call custom logic when the process instance executes the service task. A service task is configured to either use a class that needs to be put on the classpath or an expression. This setting disables the usage of service tasks completely.
Disables the possibility to define execution listeners in a BPMN process definition. Execution listeners allow you to add custom logic to the process that is not visible in the diagram. This setting also disables task listeners on tasks.
Disables the mail task that is used for sending emails.
Disables the usage of all intermediate throw events: none, signal, message, error. They can be used to create infinite loops in processes.
Disables the usage of the manual task in a process definition.
Disables the usage of the business rule task in a process definition.
Disables the usage of the Camel task in a process definition. Camel tasks can interact with Apache Camel for various system integrations and, like regular JavaDelegate classes, have access to the whole engine.
Disables the usage of the Mule task in a process definition. Mule tasks are used to interact with a Mule server.
The following validators don’t disable a task as a whole, but rather a feature:
validator.editor.bpmn.disable.startevent.timecycle: Allows the usage of a timer start event, but not with a timeCycle attribute, as it could be used to create process instances or tasks for many people very quickly, or simply to stress the system resources.
validator.editor.bpmn.limit.servicetask.only-class: Limits the service task to only be configured with a class attribute (so no expression or delegate expression is allowed). Since the available classes are restricted by what is on the classpath, there is a strict control over which logic is exposed.
validator.editor.bpmn.limit.usertask.assignment.only-idm: Limits the user task assignment to only the values that can be selected using the Identity Store option in the assignment pop-up. The reasoning is that this is the only way to guarantee safe values are selected. Otherwise, by allowing fixed values such as an expression, a random bean could be invoked or used to get system information.
validator.editor.bpmn.disable.loopback: Disables looping back with a sequence flow from an element to itself. When loopbacks are allowed, they can create infinite loops if not applied correctly.
validator.editor.bpmn.limit.multiinstance.loop: Limits the loop functionality of a multi-instance: only a loop cardinality between 1 and 10 is allowed, and neither a collection nor a completion condition is allowed. In short, only very simple loops are permitted. This is currently applied to call activities, sub-processes, and service tasks.
validator.editor.dmn.expression: Validates that the expressions in the decision tables are correct according to the DMN specification. By default this is true (unlike the others!). This means that by default, the DMN decision tables are checked for correctness. If you use the structured expression editor to fill in the decision tables, the resulting expressions will be valid. However, if you want to type MVEL expressions directly, this property needs to be set to false.
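As an example, these feature validators are enabled through the same properties file as the other settings. A sketch of a locked-down configuration (values illustrative; remember that validator.editor.dmn.expression defaults to true while the others default to false):

```
validator.editor.bpmn.disable.startevent.timecycle=true
validator.editor.bpmn.limit.servicetask.only-class=true
validator.editor.bpmn.limit.usertask.assignment.only-idm=true
validator.editor.bpmn.disable.loopback=true
validator.editor.bpmn.limit.multiinstance.loop=true
validator.editor.dmn.expression=true
```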
If you start up the application without a license, it will enter read-only mode; however, you can upload a license from the user interface at a later stage. In this situation, use the following configuration properties to configure the license.
Property | Description | Default |
license.multi-tenant | If no license is available on first bootstrap, this property decides whether the system goes into single- or multi-tenant mode. | false |
license.default-tenant | If no license is available on first bootstrap, this property decides the name of the default tenant. | tenant |
license.allow-upload | Decides whether license uploads are allowed in the system. | true |
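For example, to boot an unlicensed installation into single-tenant mode with a custom default tenant name while still allowing license uploads, you could set (values illustrative):

```
license.multi-tenant=false
license.default-tenant=mycompany
license.allow-upload=true
```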
It is possible to run Alfresco Process Services in so-called "multi-schema multi-tenancy" mode (MS-MT). This is a multi-tenant setup, where every tenant has its own database schema. This means that the data of one tenant is completely separated from the data of other tenants.
This is an alternative to the "regular" multi-tenant mode, where the data of all tenants is stored in the same database schema and the data gets a "tenant tag" to identify which tenant the data belongs to. The following diagram shows this setup:
The main benefit of this setup is the ease of setup and configuration: it is no different from setting up a regular single-tenant or multi-tenant system. Each request can be handled by any node, and the load balancer can route using simple routing algorithms.
The downside of this setup is clearly that the database can become the bottleneck if it has to hold all the data of all tenants and there is no "physical separation" of the tenant data.
The MS-MT setup looks as follows:
The most important benefit of this approach is that the data of each tenant is completely separated from the data of other tenants. Since only data of one tenant is stored in the database schema, queries will generally be more performant.
The downside of this approach is immediately visible in this diagram: each node needs to have a connection pool to the database schema of the tenant. With many tenants, this can mean quite a bit of "housekeeping" that will need to be performed compared to the previous approach (which can be negative for performance). Note that there is a "master database" or "primary database" in this diagram. This database stores the configurations of the tenant data sources and the mapping between user and tenant.
Alternatively, as shown in the following diagram, it is possible to configure the Suite nodes such that they only manage a certain list of tenants (for example, in the picture below the last node only manages tenant Z, and the first two manage tenants A and B, but not Z). Although this alleviates the downside of the previous setup, it does come with an extra cost: the load balancer now needs to be more intelligent and route each incoming request to the appropriate node. This means the request needs information identifying which tenant it comes from. This requires custom coding on the client side and is not available by default in the Alfresco Process Services web client.
Taking this to the extreme, it is possible to have one (or more) nodes per tenant. However, in that case it is probably easier to run a single-tenant Alfresco Process Services instance for each tenant. The remarks about the load balancer and enriching the request with tenant information from the previous setup still apply.
Currently, the following known limitations apply to the multi-schema multi-tenancy (MS-MT) feature:
As with regular multi-tenancy, it is not possible to configure the out of the box LDAP synchronization to synchronize users to different tenants.
The tenant can only be configured through the REST API, not via the "identity management" app.
Users need to be created by a user that is a "Tenant Administrator", not a "Tenant Manager".
Updating a tenant configuration (more specifically: switching the data source) cannot be done dynamically; a restart of all nodes is required for it to be picked up.
A user id needs to be unique across all tenants (for example, an email address). This is because a mapping {user id, tenant id} is stored in the primary database to determine the correct tenant data source.
This section describes how the MS-MT feature works; it can be skipped if you are only interested in setting up an MS-MT Alfresco Process Services installation.
The MS-MT feature depends on this fundamental architecture:
There is one "primary datasource"
The configurations of the tenants are stored here (for example, their data source configuration).
The user to tenant mapping is stored here (although this can be replaced by custom logic).
The "Tenant Manager" user is stored here (as this user doesn’t belong to any tenant).
There are x data sources
The tenant specific data is stored here.
For each tenant, a datasource configuration similar to a single tenant datasource configuration needs to be provided.
For each tenant datasource, a connection pool is created.
When a request comes in, the tenant is determined.
A tenant identifier is set on a threadlocal (making it available to all subsequent logic executed by that thread).
The com.activiti.database.TenantAwareDataSource switches to the correct tenant datasource based on this threadlocal.
The following diagram visualizes the above points: when a request comes in, the security classes for authentication (configured using Spring Security) kick in before any logic is executed. The request contains the userId. Using this userId, the primary datasource is consulted to find the tenantId that corresponds with it (note: this information is cached in a configurable way, so the primary datasource is not hit on every request; it does mean, however, that removing a user from a tenant can take a configurable amount of time to become visible on all nodes). This means that in MS-MT mode there is a (very small) overhead on each request which isn’t there in the default mode.
The tenantId is now set on a threadlocal variable (mimicking how Spring Security and its SecurityContext works). If the value is ever needed, it can be retrieved through the com.activiti.security.SecurityUtils.getCurrentTenantId() method.
When the logic is now executed, it will typically start a new database transaction. In MS-MT mode, the default DataSource implementation is replaced by the com.activiti.database.TenantAwareDataSource class. This implementation returns the datasource corresponding to the tenantId value set on the threadlocal. The logic itself remains unchanged.
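The routing described above can be sketched in a few lines. The following Python snippet is an illustrative model of the idea behind com.activiti.database.TenantAwareDataSource, not the actual implementation; FakeDataSource and the method names are hypothetical:

```python
import threading

# Thread-local holder for the current tenant id (set during authentication,
# mirroring the threadlocal described above).
_current = threading.local()

def set_current_tenant(tenant_id):
    _current.tenant_id = tenant_id

class FakeDataSource:
    """Stand-in for a real connection pool; for illustration only."""
    def __init__(self, name):
        self.name = name

    def get_connection(self):
        return "connection-to-" + self.name

class TenantAwareDataSource:
    """Routes get_connection() to the datasource of the tenant on the current thread."""
    def __init__(self, tenant_datasources):
        self.tenant_datasources = tenant_datasources  # tenant id -> datasource

    def get_connection(self):
        tenant_id = getattr(_current, "tenant_id", None)
        if tenant_id is None:
            raise RuntimeError("No tenant id set on the current thread")
        return self.tenant_datasources[tenant_id].get_connection()

datasource = TenantAwareDataSource({
    1: FakeDataSource("tenant-alfresco"),
    2: FakeDataSource("tenant-acme"),
})
set_current_tenant(1)
conn = datasource.get_connection()  # routed to the tenant-alfresco datasource
```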
The MS-MT feature does have a technical impact on some other areas too:
All default caches (process, forms, apps, script files, and so on) use the db id as cache key. In MS-MT mode, the db id is not unique across tenants, and the cache switches to a cache-per-tenant implementation.
Event processing (for analytics) by default polls the database for new events that need to be sent to Elasticsearch. In MS-MT mode, the events for each tenant datasource are polled.
The Process Engine job executor (responsible for timers and async continuations) polls the database for new jobs to execute. In MS-MT mode, this polling needs to happen for each tenant datasource.
The Hibernate id generator by default keeps a pool of identifiers for each entity primary key in memory. Hibernate stores the latest id in a database table. In MS-MT mode, however, there should be a pool for each tenant, and the id generator needs to use the correct tenant datasource when refreshing the pool of ids.
A similar story applies for the Process Engine id generator.
To run Alfresco Process Services in MS-MT mode, you need to have installed a multi-tenant license. Switching to MS-MT mode is done by setting the tenancy.model property to isolated.
tenancy.model=isolated
When using MS-MT, there always needs to be a primary datasource. This datasource is configured in exactly the same way as a single datasource. For example, when using a MySQL database:
datasource.url=jdbc:mysql://127.0.0.1:3306/primary-activiti?characterEncoding=UTF-8
datasource.driver=com.mysql.jdbc.Driver
datasource.username=alfresco
datasource.password=alfresco
hibernate.dialect=org.hibernate.dialect.MySQLDialect
Booting up Alfresco Process Services now will create the regular tables in the primary-activiti schema, plus some additional tables specific to the primary datasource (such tables are prefixed with MSMT_). A default user with tenant manager capabilities is also created (the login email and password can be controlled with the admin.email and admin.passwordHash properties).
One thing to remember is that there are no REST endpoints specific to MS-MT. All the existing tenant endpoints simply behave slightly differently when running in MS-MT mode. Using this tenant manager user (credentials in the basic auth header), it is now possible to add new tenants by calling the REST API:
POST http://your-domain:your-port/activiti-app/api/enterprise/admin/tenants
with the following JSON body:
{
  "name" : "alfresco",
  "configuration" : "tenant.admin.email=admin@alfresco.com\n datasource.driver=com.mysql.jdbc.Driver\n datasource.url=jdbc:mysql://127.0.0.1:3306/tenant-alfresco?characterEncoding=UTF-8\n datasource.username=alfresco\n datasource.password=alfresco"
}
Note that for some databases, such as PostgreSQL, you may need to set database.schema; for databases that work with catalogs, set database.catalog.
Note the \n in the body of the configuration property.
Also note that this configuration will be stored encrypted (using the security.encryption.secret secret).
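Since the configuration property is a single string, each datasource setting is joined with \n. A small sketch of assembling such a request body (all values are the example values from above):

```python
import json

# Tenant datasource settings; each property becomes one line of the
# "configuration" string, joined with \n as in the request body above.
config_lines = [
    "tenant.admin.email=admin@alfresco.com",
    "datasource.driver=com.mysql.jdbc.Driver",
    "datasource.url=jdbc:mysql://127.0.0.1:3306/tenant-alfresco?characterEncoding=UTF-8",
    "datasource.username=alfresco",
    "datasource.password=alfresco",
]
body = json.dumps({"name": "alfresco", "configuration": "\n".join(config_lines)})
```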
This will:
Create a tenant named alfresco.
Store the data of this tenant in the database schema tenant-alfresco.
Create a default tenant administrator user with the email login admin@alfresco.com and the default password admin (this can be changed after logging in).
When executing this request, in the logs you will see the tenant being created in MSMT mode:
INFO com.activiti.msmt.MsmtIdmService - Created tenant 'alfresco' in primary datasource (with id '1')
In the logs, you’ll see:
The datasource connection pool for this tenant being created.
The Liquibase logic creating the correct tables.
At the end, you’ll see the following message indicating all is ready:
INFO com.activiti.msmt.MsmtIdmService - Created tenant 'alfresco' in tenant datasource (with id '1')
INFO com.activiti.msmt.MsmtIdmService - Registered new user 'admin@alfresco.com' with tenant '1'
You can now log in to the web UI using admin@alfresco.com/admin, change the password, and add some users. These users can of course also be added via the REST API using the tenant admin credentials.
A new tenant can easily be added in a similar way:
POST http://your-domain:your-port/activiti-app/api/enterprise/admin/tenants
with body
{
  "name" : "acme",
  "configuration" : "tenant.admin.email=admin@acme.com\n datasource.driver=com.mysql.jdbc.Driver\n datasource.url=jdbc:mysql://127.0.0.1:3306/tenant-acme?characterEncoding=UTF-8\n datasource.username=alfresco\n datasource.password=alfresco"
}
When the tenant admin for this tenant, admin@acme.com, logs in, no data of the other tenant can be seen (as is usual in multi-tenancy). Also, when checking the tenant-alfresco and tenant-acme schemas, you’ll see that the data is contained within each tenant’s schema.
The tenant manager can get a list of all tenants:
GET http://your-domain:your-port/activiti-app/api/enterprise/admin/tenants
[
  {
    "id": 2,
    "name": "acme"
  },
  {
    "id": 1,
    "name": "alfresco"
  }
]
To get specific information on a tenant, including the configuration:
GET http://your-domain:your-port/activiti-app/api/enterprise/admin/tenants/1
which gives:
{
  "id": 1,
  "name": "alfresco",
  "created": "2016-04-27T09:22:33.511+0000",
  "lastUpdate": null,
  "domain": null,
  "active": true,
  "maxUsers": null,
  "logoId": null,
  "configuration": "tenant.admin.email=admin@alfresco.com\n datasource.driver=com.mysql.jdbc.Driver\n datasource.url=jdbc:mysql://127.0.0.1:3306/tenant-alfresco?characterEncoding=UTF-8\n datasource.username=alfresco\n datasource.password=alfresco"
}
Assuming a multi-node setup: when creating new tenants, the REST call is executed on one particular node. After the tenant is successfully created, users can log in and use the application without any problem on any node (so the load balancer can simply distribute randomly, for example). However, some functionality that depends on background threads (the job executor, for example) will only start on the other nodes after a certain period of time since the creation of the tenant.
This period of time is configured via the msmt.tenant-validity.cronExpression cron expression (by default every 10 minutes).
Similarly, when a tenant is deleted, the deletion will happen on one node. It will take a certain amount of time (also configured through the msmt.tenant-validity.cronExpression property) before the deletion has rippled through all the nodes in a multi-node setup.
Note that changes to a tenant datasource configuration are not automatically picked up and require a reboot of all nodes. However, changing the datasource of a tenant should happen very infrequently.
There are some configuration properties specific to MS-MT:
tenancy.model : possible values are shared (the default if omitted) or isolated. isolated switches a multi-tenant setup to MS-MT.
msmt.tenant-validity.cronExpression : the cron expression that determines how often the validity of tenants must be checked (see previous section) (by default every 10 minutes).
msmt.async-executor.mode : There are two implementations of the async job executor for the Activiti core engine. The default is isolated, where a full async executor is booted up for each tenant: each tenant gets its own acquire threads and a threadpool and queue for executing jobs. The alternative value for this property is shared-queue, where there are acquire threads for each tenant, but the actual job execution is done by a shared threadpool and queue. This saves some server resources, but could lead to slower job processing when there are many jobs.
msmt.bootstrapped.tenants : a semicolon-separated list of tenant names. Can be used to make sure a node in a multi-node setup only takes care of the tenants in the list. Does require that the load balancer uses similar logic.
The following interfaces can be used to replace the default implementations of MS-MT related functionality:
com.activiti.api.msmt.MsmtTenantResolver : used when the user authenticates and the tenant id is determined. The default implementation uses a database table (with caching) to store the user id to tenant id relationship.
com.activiti.api.msmt.MsmtUserKeyResolver : works in conjunction with the default MsmtTenantResolver; returns the user id for a user. By default this returns the email or external id (if an external id is used).
com.activiti.api.datasource.DataSourceBuilderOverride : called when a tenant datasource configuration is used to create a datasource. If there is a bean on the classpath implementing this interface, the logic will be delegated to this bean to create the javax.sql.DataSource. By default, a c3p0 DataSource / connection pool will be created for the configuration.
Cross-Site Request Forgery, also referred to as CSRF, is one of the most common forms of attack plaguing web browsers. This type of attack results in a malicious request being submitted on a user’s behalf without their consent.
Typically, when the CSRF setting is enabled and an HTTP request is made against a web application, the token values sent from the client to the server are validated to prevent unauthorized requests that were not generated by the server. CSRF tokens are usually stored on the server and verified every time a request is sent. However, in Alfresco Process Services this feature is implemented slightly differently: CSRF tokens are generated on the client instead of the server, and placed in a cookie CSRF-TOKEN and a header X-CSRF-TOKEN. The server side then verifies that the header and cookie values match.
Where:
X-CSRF-TOKEN = header value
CSRF-TOKEN = cookie value
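The server-side verification is essentially a double-submit comparison of the two values. A minimal sketch of that rule (illustrative, not the actual Process Services implementation):

```python
def csrf_request_is_valid(cookies, headers):
    """Accept the request only when the CSRF cookie and header are both present and equal."""
    cookie_token = cookies.get("CSRF-TOKEN")
    header_token = headers.get("X-CSRF-TOKEN")
    return cookie_token is not None and cookie_token == header_token
```

A request missing either value, or carrying mismatched values, is rejected.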
This provides extra security as the cookie that belongs to Alfresco Process Services can only be accessed for pages generated or served by the Alfresco Process Services domain.
By default, CSRF protection is enabled in Alfresco Process Services. To disable it, set the following property:
security.csrf.disabled=true
Process Services uses Logback [112] for logging.
Process Services installs with the default Logback configuration reading from <Tomcat install location>/webapps/activiti-app/WEB-INF/classes/logback.xml and the equivalent location for Process Services Administrator.
The default configuration can be overridden by placing your own logback.xml in <Tomcat install location>/lib.
By default, Process Services logs to the console. To log to a file, edit the logging configuration file to specify a file appender and location. For example:
<appender name="FILE" class="ch.qos.logback.core.FileAppender">
  <file>${LOG_DIR}/activiti-app.log</file>
  <append>true</append>
  <encoder>
    <pattern>%-4relative [%thread] %-5level %logger{35} - %msg%n</pattern>
  </encoder>
</appender>
It is possible to configure Logback to rescan the configuration file for modifications at regular intervals, without restarting the application server, by adding the following line to your custom logback.xml file:
<configuration scan="true" scanPeriod="45 seconds">
Additional configuration options [113] are also available for customizing logging.
The Administrator app can be used to inspect and manage the data for an Alfresco Process Services Process Engine (or cluster of engines). It is also used for cluster configuration and monitoring. It is distributed as a separate web application (WAR file).
Typically, there is one single Administrator application for multiple environments (for example, development, testing, production, and so on), which is accessed by a handful of users (system administrators). Generally, it is not necessary to have multiple instances of this application running.
The Process Engine is cluster-enabled so, together with the Alfresco Process Services Administrator, a user can configure and monitor a cluster (or multiple different clusters) through a graphical user interface. The clustered engines will use the same configuration and will report metrics and status back to the Alfresco Process Services Administrator where they are displayed.
The Alfresco Process Services Administrator is distributed as a WAR (Web Application ARchive) file that can be dropped in any Java web container.
Drop the activiti-admin.war file into the web container and start the web container.
To make the application use your database, you must do the following:
Copy the correct JDBC database driver to the classpath of the web application.
Create a property file called activiti-admin.properties and make sure it is on the classpath of the web application. The properties must point to the correct environment settings. If no properties file is found on the classpath, then the WEB-INF/classes/META-INF/activiti-admin file is used by default.
The database for the Administrator app is configured using the following properties. See the Database configuration [121] section for more information about how to configure Alfresco Process Services.
For example (using MySQL):
datasource.driver=com.mysql.jdbc.Driver
datasource.url=jdbc:mysql://127.0.0.1:3306/activitiadmin?characterEncoding=UTF-8
datasource.username=alfresco
datasource.password=alfresco
hibernate.dialect=org.hibernate.dialect.MySQLDialect
The Alfresco Process Services Administrator can show the process data and manage the configuration of multiple clusters. In this context, a cluster is a number of Process Engines that logically belong together. Note that this does not relate to the way these engines are architecturally set up: embedded, exposed through REST, with or without a load balancer in front, and so on.
Also note that the Administrator is capable of inspecting the information of each Process Engine (if configured correctly). It is therefore not solely bound to the Process Engine in Alfresco Process Services, but works with all enterprise Process Engines.
Multiple clusters can be configured and managed through the Alfresco Process Services Administrator. This is displayed in the drop-down in the top-right corner:
Each of the engines in a cluster should point to the same database schema. To access the data of a cluster, the Administrator application uses one Alfresco Process Services REST application per cluster (to avoid direct access to the database from the Administrator or potentially to manage different engine versions).
The REST API endpoints can be included in your application using the Maven artifact com.activiti.activiti-rest. It is configured in a similar way as the Administrator.
No special setup is needed when using Alfresco Process Services, as it contains the necessary REST API endpoints out of the box.
As shown in the diagram below, any cluster can consist of multiple engine nodes (pointing to the same database schema); the data managed in the Administrator is fetched only through an Alfresco Process Services REST application.
In the same drop-down as shown above, a new cluster can be created. Note that a user will be created when doing so. This user is configured with the role of cluster manager and is used to send information to the HTTP REST API of the Administrator application, but for security reasons it cannot log in to the Administrator application as a regular user.
The REST endpoint for each cluster can be configured through the Administrator. Simply change the settings for the endpoint on the Configuration > Engine page while the cluster of choice is selected in the drop-down in the top-right corner. The current endpoint configuration is also shown on this page:
The Process Engine and the Administrator app communicate through HTTP REST calls. To send or get information from the Administrator app, you must configure the Process Engine with a correct URL and credentials.
For the engine, this can be done programmatically:
processEngineConfig.enableClusterConfig();
processEngineConfig.setEnterpriseAdminAppUrl("http://localhost:8081/activiti-admin");
processEngineConfig.setEnterpriseClusterName("development");
processEngineConfig.setEnterpriseClusterUserName("dev");
processEngineConfig.setEnterpriseClusterPassword("dev");
processEngineConfig.setEnterpriseMetricSendingInterval(30);
This configures the base HTTP API URL, the name of the cluster that the engine is part of, the credentials of the user allowed to send data to the API and the time interval between sending data to the Administrator application (in seconds).
Alfresco Process Services includes the Process Engine. To enable engine clustering, you can set the properties (similar to the programmatic approach) directly in the configuration file:
cluster.enable=true
cluster.config.adminapp.url=http://localhost:8081/activiti-admin
cluster.config.name=development
cluster.config.username=dev
cluster.config.password=dev
cluster.config.metricsendinginterval=30
Alfresco Process Services also sends extra metrics to the Administrator application. To configure the rate of sending, a cron expression can be set (by default the same as the rate of sending for the Process Engine):
cluster.config.app.metricsendingcronexpression=0/30 * * * * ?
Alternatively, you can generate a jar file with these settings through the Configuration > Generate cluster jar button. If you place the jar file on the classpath (or use it as a Maven dependency via a local Maven repository) of an engine or Alfresco Process Services application, it will take precedence over the properties files.
Once the application is running, metrics for that node in the cluster are shown in the Admin application:
In the Admin application, the following two settings can be changed:
cluster.monitoring.max.inactive.time=600000
cluster.monitoring.inactive.check.cronexpression=0 0/5 * * * ?
cluster.monitoring.max.inactive.time: A period of time, expressed in milliseconds, after which a node is deemed inactive and is removed from the list of nodes of a cluster (it will also no longer appear in the monitoring section of the application). When a node is properly shut down, it sends out an event indicating it is shutting down; from that point on, its data is kept in memory for the amount of time indicated here. When a node is not properly shut down (for example, due to hardware failure), this is the period of time before removal, counted from the moment the last event was received. Make sure the value here is higher than the sending interval of the nodes, to avoid nodes being incorrectly removed. The default is 10 minutes.
cluster.monitoring.inactive.check.cronexpression: A cron expression that configures when the check for inactive nodes runs. When executed, it marks any node that hasn’t been active for cluster.monitoring.max.inactive.time milliseconds as inactive. By default, the check runs every 5 minutes.
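The two settings interact as follows: each time the cron expression fires, every node whose last received event is older than the maximum inactive time is marked inactive. A sketch of that check (illustrative, not the actual implementation):

```python
def find_inactive_nodes(last_event_millis_by_node, now_millis, max_inactive_millis=600000):
    """Return the nodes whose last event is older than the allowed inactive time."""
    return [node for node, last_event in last_event_millis_by_node.items()
            if now_millis - last_event > max_inactive_millis]

# With the defaults, a node silent for more than 10 minutes is marked inactive.
stale = find_inactive_nodes({"node-a": 0, "node-b": 500000}, now_millis=700000)
```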
For each cluster, a master configuration can be defined. When the instance boots up, it will request the master configuration data from the Administrator application. For this to work, the cluster.x properties (or equivalent programmatic setters) listed above need to be set correctly.
There is one additional property that can be set: cluster.master.cfg.required. This is a boolean value; if set to true, it will stop the instance from booting up when the Admin app cannot be reached or no master configuration is defined. If set to false, the instance will boot up using the local properties file instead of the master configuration.
The master configuration works for both clusters of embedded Process Engines and clusters of Alfresco Process Services instances. The two cannot be mixed within the same cluster, though.
Note: When the master configuration is changed, the cluster instances need a reboot. The Administrator application will also show a warning for such a node in the monitoring tab, indicating that the master configuration currently being used is incorrect.
Communication with the Administrator Application is done using HTTP REST calls. The calls use HTTP Basic Authentication for security, but do use different users, depending on the use case.
Alfresco Process Services and the Administrator Application do not share user stores because:
Typically, there are only a handful of users involved with the Administrator Application.
The Administrator Application can be used independently.
The following picture gives a high-level overview:
The Process Engine pushes and pulls data to and from the Administrator Application REST API. These calls use basic authentication with a user defined in the Administrator Application user store (relational database). Such a user is automatically created when a new cluster configuration is created (see above), but its credentials need to be configured on the engine/Suite app side (see the cluster.xx properties.)
The Administrator Application allows you to browse and manage data in an Enterprise Process Engine. It calls the REST API to do so, using a user defined in the user store of the Suite Application (or any other authentication mechanism for the embedded engine use case).
For Alfresco Process Services: The user needs to have a Tenant Admin or Tenant Manager role, as the Administrator Application gives access to all data of the engine.
The following diagram illustrates what this means for an end user:
An end user logs in through the UI, both on the Suite and the Admin Application. Again, the user store is not shared between the two.
It’s important to understand that the HTTP REST calls done against the Suite REST API are made using a user defined in the user store of the Suite Application. This user can be configured through the Administrator Application UI.
When using LDAP, an equivalent reasoning applies:
The user that logs in to the Administrator Application is defined in the relational database of the Administrator Application. However, the HTTP REST call will now use a user that is defined in LDAP.
When using the Process Engine embedded in a custom application (or multiple embedded engines), you still need to set up a REST endpoint that the Administrator application can use to communicate with, in order to see and manage data in the engine cluster.
Alfresco Process Services already contains this REST API, so this additional REST application is only needed for the embedded engine use case.
Out of the box, the REST application is configured with a default admin user for authentication and uses an in-memory H2 database. The latter, of course, needs to be changed to point to the same database that the engines are using.
The easiest way to do this is to change the properties in the /WEB-INF/classes/META-INF/db.properties file to the correct datasource parameters. Make sure the JDBC driver JAR is on the classpath.
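As an illustration, a db.properties file pointing the REST application at the same PostgreSQL database as the engines might look like the following sketch. The property keys should be verified against the file shipped in your distribution, and the connection values shown are placeholders:

```properties
# Datasource parameters for the shared engine database (example values).
datasource.driver=org.postgresql.Driver
datasource.url=jdbc:postgresql://localhost:5432/activiti
datasource.username=alfresco
datasource.password=alfresco
```

The corresponding PostgreSQL JDBC driver JAR would then need to be placed on the application's classpath.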
To change the default user, change the settings in /WEB-INF/classes/META-INF/engine.properties. In the same file, you can also configure the following basic engine settings:
engine.schema.update: Indicates whether the database schema should be upgraded after booting the engine (if needed). The default value is true.
engine.asyncexecutor.enabled: Indicates whether the async job executor is enabled. By default, it is set to false, as this is better done on the engine nodes themselves; otherwise, you would have to make sure the classpath contains all the delegates used in the various processes.
engine.asyncexecutor.activate: Instructs the Process Engine to start the Async executor thread pool at startup. The default value is false.
engine.history.level: The history level of the process engine. Make sure this matches the history level in the other engines in the cluster, as otherwise this might lead to inconsistent data. The default value is audit.
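Taken together, the settings above correspond to entries like the following in engine.properties; the values shown are the documented defaults:

```properties
# Upgrade the database schema on engine boot if needed (default: true).
engine.schema.update=true
# Async job executor is off here; jobs run on the engine nodes instead (default: false).
engine.asyncexecutor.enabled=false
# Do not start the async executor thread pool at startup (default: false).
engine.asyncexecutor.activate=false
# Must match the history level of the other engines in the cluster (default: audit).
engine.history.level=audit
```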
If these two property files are insufficient for configuring the process engine, you can override the complete process engine configuration in a Spring XML file located at /WEB-INF/classes/META-INF/activiti-custom-context.xml. Uncomment the bean definitions and configure the engine without restrictions, similar to a standard Activiti Process Engine configuration.
The out-of-the-box datasource uses C3P0 as the connection pooling framework. In the same file, you can configure this datasource and the transaction manager.
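Once uncommented, the datasource and transaction manager beans in activiti-custom-context.xml could look roughly like the following sketch. The bean ids should be checked against the commented-out template in your distribution, and the JDBC values are placeholders; `ComboPooledDataSource` is the standard C3P0 pooled datasource class and `DataSourceTransactionManager` the standard Spring transaction manager:

```xml
<!-- C3P0 pooled datasource pointing at the shared engine database (example values). -->
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource">
    <property name="driverClass" value="org.postgresql.Driver"/>
    <property name="jdbcUrl" value="jdbc:postgresql://localhost:5432/activiti"/>
    <property name="user" value="alfresco"/>
    <property name="password" value="alfresco"/>
    <property name="minPoolSize" value="5"/>
    <property name="maxPoolSize" value="25"/>
</bean>

<!-- Spring transaction manager wired to the pooled datasource. -->
<bean id="transactionManager"
      class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="dataSource"/>
</bean>
```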
The application uses Spring Security for authentication. By default, it uses the Alfresco Process Services identityService to store and validate the user. To change this, add a bean with the id authenticationProvider to /WEB-INF/classes/META-INF/activiti-custom-context.xml. The class should implement the org.springframework.security.authentication.AuthenticationProvider interface (see the Spring documentation for the various implementations).
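As a sketch, registering a custom provider in activiti-custom-context.xml could look like the following, where com.example.MyAuthenticationProvider is a hypothetical class (not part of the product) implementing org.springframework.security.authentication.AuthenticationProvider:

```xml
<!-- "authenticationProvider" is the bean id the application looks up.        -->
<!-- com.example.MyAuthenticationProvider is a hypothetical custom class that -->
<!-- implements org.springframework.security.authentication.AuthenticationProvider. -->
<bean id="authenticationProvider" class="com.example.MyAuthenticationProvider"/>
```

Spring Security ships several ready-made implementations of this interface (for example, LDAP-backed providers) that can be configured here instead of a custom class.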
Use the Administrator application to perform basic administration functions in Alfresco Process Services. For example, you can inspect the state of Process Engines, delete an app, view when an app was deployed, or monitor clusters.
The Administrator application has the following tabs:
Apps - Use for deleting apps, redeploying an app to another cluster, and downloading apps.
Deployments - View the current deployment and its content such as process definitions, deploy time, tenant information and so on.
Definitions - View process definitions and their related instances.
Instances - View running or completed process instances for each process definition. You can also see related information for each process definition, such as, tasks, variables, subprocesses, jobs, decision tables, and forms information. In addition, you can download binary process data for troubleshooting process issues.
Tasks - View task information and perform actions on tasks, such as editing, assigning/claiming, delegating, or completing them. In addition, you can view task forms, subtasks, variables, and identity links for a particular task.
Jobs - View the current job details based on its Process Instance ID, due date, and Job ID. Exceptions are displayed if the jobs failed to execute (for example, if a mail server could not be reached).
Monitoring - Enables you to monitor the cluster information.
Configuration - Add and configure cluster information. See Cluster configuration and monitoring [124] for more information.
You can deploy apps in various ways in the Administrator application. For example, you can upload and publish an app model from a zip file, deploy an existing app from one cluster to another, or redeploy an existing app model to another cluster. Deploying app models to another cluster is particularly useful when your app needs to be promoted from staging to production or copied from the development environment to production. When changes made in the development environment need to be carried over to production, select the target cluster (the production system in this case) in the Administrator application and redeploy your app.
To upload and publish an app model from a zip file, in the Administrator application, click Apps > Publish an app model.
Prerequisite: Make sure you have configured at least two clusters. To create a new cluster, select Clusters list > Create new cluster.
To deploy an app model to a different cluster:
To redeploy an existing app to a different cluster:
To download an app:
To delete an app:
The button appears for binary variables only, as the process designer detects the underlying object type.
The binary process data is downloaded to the local machine. The file is provided in a serialized binary format.
The new user can log in and access the Administration app but does not have the ability to make any changes.
Links:
[1] https://docs.alfresco.com/../topics/upgrading_from_a_previous_release.html
[2] https://docs.alfresco.com/../topics/multi_node_clustered_setup.html
[3] https://docs.alfresco.com/../topics/administration_application_config.html
[4] https://docs.alfresco.com/../concepts/ps-logging.html
[5] https://docs.alfresco.com/../topics/administrator_application.html
[6] https://docs.alfresco.com/../concepts/welcome.html
[7] https://docs.alfresco.com/../topics/upgrading_using_installer.html
[8] https://docs.alfresco.com/../topics/upgrading_using_war.html
[9] https://docs.alfresco.com/../topics/adminGuide.html
[10] https://docs.alfresco.com/installing_using_an_installer.html
[11] https://docs.alfresco.com/uploading_a_license_from_the_user_interface_ui.html
[12] https://www.alfresco.com/services/subscription/supported-platforms
[13] https://docs.alfresco.com/installing_using_the_war_file.html
[14] https://docs.alfresco.com/databaseConfiguration.html%23databaseConfiguration__jdbc
[15] https://docs.alfresco.com/databaseConfiguration.html%23databaseConfiguration__jndi
[16] https://docs.alfresco.com/databaseConfiguration.html%23databaseConfiguration__hibernate
[17] https://docs.alfresco.com/../topics/general_server_settings.html
[18] https://docs.alfresco.com/../tasks/ps-encryption-process-flow.html
[19] https://docs.alfresco.com/../topics/databaseConfiguration.html
[20] https://docs.alfresco.com/../topics/ps-language-support.html
[21] https://docs.alfresco.com/../concepts/is-intro.html
[22] https://docs.alfresco.com/../tasks/ps-auth-kerberos-ADconfig.html
[23] https://docs.alfresco.com/../concepts/ps-app-config-OAuth-client.html
[24] https://docs.alfresco.com/../topics/enabling-cors.html
[25] https://docs.alfresco.com/../topics/business_calendar_settings.html
[26] https://docs.alfresco.com/../concepts/login_session.html
[27] https://docs.alfresco.com/../topics/initial_user_created_on_first_start_up.html
[28] https://docs.alfresco.com/../topics/emailServerConfiguration.html
[29] https://docs.alfresco.com/../topics/elasticsearch_configuration.html
[30] https://docs.alfresco.com/../topics/application_access_and_default_example_app.html
[31] https://docs.alfresco.com/../topics/group_manager_involvement.html
[32] https://docs.alfresco.com/../topics/process_definition_cache.html
[33] https://docs.alfresco.com/../topics/contentStorageConfig.html
[34] https://docs.alfresco.com/../topics/microsoft_office_integration.html
[35] https://docs.alfresco.com/../topics/logging_backend_metrics.html
[36] https://docs.alfresco.com/../concepts/config_process_task_limit.html
[37] https://docs.alfresco.com/../topics/externalIdentityManagement.html
[38] https://docs.alfresco.com/../topics/integration_with_external_systems.html
[39] https://docs.alfresco.com/../topics/validator_configuration.html
[40] https://docs.alfresco.com/../topics/license_configuration.html
[41] https://docs.alfresco.com/../topics/multi_schema_multi_tenancy_ms_mt.html
[42] https://docs.alfresco.com/../topics/cross_site_request_forgery.html
[43] http://www.jasypt.org/download.html
[44] http://www.jasypt.org/cli.html
[45] https://www.ca.com/us/services-support/ca-support/ca-support-online/knowledge-base-articles.tec1698523.html
[46] https://www-01.ibm.com/marketing/iwm/iwm/web/reg/pick.do?source=jcesdk
[47] http://www.programering.com/a/MjN1kTNwATg.html
[48] https://stackoverflow.com/questions/35485826/turn-off-tomcat-logging-via-spring-boot-application
[49] https://stackoverflow.com/questions/17019233/pass-user-defined-environment-variable-to-tomcat
[50] http://www-01.ibm.com/support/docview.wss?uid=swg21417365
[51] https://www.ibm.com/support/knowledgecenter/en/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/welcvariables.html
[52] http://www.mchange.com/projects/c3p0/
[53] http://mybatis.github.io/mybatis-3/
[54] https://docs.alfresco.com/identity/concepts/identity-overview.html
[55] https://docs.alfresco.com/identity/concepts/identity-deploy.html
[56] https://docs.alfresco.com/identity/concepts/identity-configure.html
[57] https://docs.alfresco.com/is-properties.html
[58] https://docs.alfresco.com/../concepts/is-properties.html
[59] https://www.keycloak.org/docs/4.8/securing_apps/index.html#_java_adapter_config
[60] http://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html
[61] http://www.example.org:8080
[62] http://localhost:8080/activiti-app
[63] http://www.elasticsearch.org/
[64] http://www.json.org/
[65] https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-low.html
[66] https://docs.alfresco.com/rebuilding_the_elasticsearch_indexes.html
[67] https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/_motivations_around_a_new_java_client.html
[68] https://docs.alfresco.com/../topics/general_settings.html
[69] https://docs.alfresco.com/../topics/event_processing_for_analytics.html
[70] https://docs.alfresco.com/../topics/rebuilding_the_elasticsearch_indexes.html
[71] https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html
[72] https://docs.alfresco.com/elasticsearch_configuration.html
[73] http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html
[74] http://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html
[75] https://docs.alfresco.com/integration_with_external_systems.html
[76] http://support.microsoft.com/kb/2123563
[77] http://www.slf4j.org/
[78] http://logging.apache.org/log4j/
[79] https://docs.alfresco.com/../topics/configuration.html
[80] https://docs.alfresco.com/../topics/server_connection_configuration.html
[81] https://docs.alfresco.com/../topics/authentication.html
[82] https://docs.alfresco.com/../topics/synchronization.html
[83] http://docs.spring.io/spring-ldap/docs/2.0.2.RELEASE/reference/#pooling
[84] https://docs.alfresco.com/../topics/generic_synchronization_settings.html
[85] https://docs.alfresco.com/../topics/user_synchronization_settings.html
[86] https://docs.alfresco.com/../topics/group_synchronization_settings.html
[87] https://docs.alfresco.com/../concepts/adding_LDAP_users.html
[88] https://docs.alfresco.com/../topics/paging.html
[89] https://docs.alfresco.com/../topics/batch_insert.html
[90] https://msdn.microsoft.com/en-us/library/ms676302(v=vs.85).aspx
[91] https://docs.alfresco.com/../concepts/ext-acs.html
[92] https://docs.alfresco.com/../concepts/ext-box.html
[93] https://docs.alfresco.com/../concepts/ext-google.html
[94] https://docs.alfresco.com/acs-sso.html
[95] https://docs.alfresco.com/acs-basic.html
[96] https://docs.alfresco.com/../topics/shareGuide.html
[97] https://docs.alfresco.com/../concepts/acs-sso.html
[98] https://docs.alfresco.com/../concepts/acs-basic.html
[99] https://docs.alfresco.com/acs-sso-properties.html
[100] https://docs.alfresco.com/../concepts/acs-sso-properties.html
[101] https://www.keycloak.org/docs/8.0/server_admin/#_offline-access
[102] https://developers.box.com
[103] https://developers.google.com/drive/v2/reference/
[104] https://docs.alfresco.com/../topics/disabling_tasks.html
[105] https://docs.alfresco.com/../topics/limit_functionality.html
[106] https://docs.alfresco.com/../topics/known_limitations.html
[107] https://docs.alfresco.com/../topics/technical_implementation.html
[108] https://docs.alfresco.com/../topics/getting_started_MS-MT.html
[109] https://docs.alfresco.com/../topics/behavior_in_a_multi_node_setup.html
[110] https://docs.alfresco.com/../topics/configuration_properties.html
[111] https://docs.alfresco.com/../topics/pluggability.html
[112] https://logback.qos.ch
[113] http://logback.qos.ch/manual/
[114] https://docs.alfresco.com/../topics/installing_administrator.html
[115] https://docs.alfresco.com/../topics/using_administrator_application.html
[116] https://docs.alfresco.com/../topics/database_configuration.html
[117] https://docs.alfresco.com/../topics/cluster_configuration_and_monitoring.html
[118] https://docs.alfresco.com/../topics/master_configuration.html
[119] https://docs.alfresco.com/../topics/http_communication_overview.html
[120] https://docs.alfresco.com/../topics/rest_app_config.html
[121] https://docs.alfresco.com/databaseConfiguration.html
[122] https://docs.alfresco.com/../topics/administrator_arch.html
[123] https://docs.alfresco.com/../topics/configuration_settings.html
[124] https://docs.alfresco.com/cluster_configuration_and_monitoring.html
[125] https://docs.alfresco.com/../topics/deploying_apps.html
[126] https://docs.alfresco.com/../tasks/admin-app-binary-download.html
[127] https://docs.alfresco.com/../tasks/admin-app-read-only.html