A TripleO nested stack Heat template that encapsulates generic configuration
data to configure a specific service. This generally includes everything
needed to configure the service excluding the local bind ports, which
are still managed in the per-node role templates directly (controller.yaml,
compute.yaml, etc.). All other (global) service settings go into
the puppet/service templates.
Each service may define its own input parameters and defaults.
Operators will use the parameter_defaults section of any Heat
environment to set per-service parameters.
Apart from service-specific inputs, there are a few default parameters common
to all the services:
* ServiceNetMap: Mapping of service_name -> network name. Default mappings
  for service to network names are defined in
  ../network/service_net_map.j2.yaml, which may be overridden via
  ServiceNetMap values added to a user environment file via
  parameter_defaults.
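For instance, a user environment file could override a single mapping like
this (the service and network names here are purely illustrative)::

  parameter_defaults:
    ServiceNetMap:
      ExampleApiNetwork: internal_api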
* EndpointMap: Mapping of service endpoint -> protocol. Contains a mapping of
  endpoint data generated for all services, based on the data included in
  ../network/endpoints/endpoint_data.yaml.
* DefaultPasswords: Mapping of service -> default password. Used to pass some
  passwords from the parent templates; this is a legacy interface and should
  not be used by new services.
* RoleName: Name of the role on which this service is deployed. A service can
  be deployed in multiple roles. This is an internal parameter (should not be
  set via environment file), which is fetched from the name attribute of the
  roles_data.yaml template.
* RoleParameters: Parameters specific to the role on which the service is
  applied. Using the format "<RoleName>Parameters" in the parameter_defaults
  section of a user environment file, parameters can be provided for a
  specific role. For example, in order to provide a parameter specific to the
  "Compute" role, it should be provided as "ComputeParameters".
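As a sketch (the parameter name and value here are hypothetical), a
role-specific value would be provided as::

  parameter_defaults:
    ComputeParameters:
      ExampleParameter: example_value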
Each service may define three ways in which to output variables to configure
Hiera settings on the nodes.
* config_settings: the hiera keys will be pushed on all roles of which the
  service is a part of.
* global_config_settings: the hiera keys will be distributed to all roles.
* service_config_settings: Takes an extra key to wire in values that are
  defined for a service that need to be consumed by some other service. For
  example::

    service_config_settings:
      haproxy:
        foo: bar

  This will set the hiera key 'foo' on all roles where haproxy is included.
Each service may define an output variable which returns a puppet manifest
snippet that will run at each of the following steps. Earlier manifests
are re-asserted when applying later ones.
* config_settings: Custom hiera settings for this service.

* global_config_settings: Additional hiera settings distributed to all roles.
* step_config: A puppet manifest that is used to step through the deployment
  sequence. Each sequence is given a "step" (via hiera('step')) that provides
  information for when puppet classes should activate themselves.
Steps correlate to the following:

1) Load Balancer configuration

2) Core Services (Database/Rabbit/NTP/etc.)

3) Early OpenStack Service setup (Ringbuilder, etc.)

4) General OpenStack Services

5) Service activation (Pacemaker)
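A step_config snippet typically guards its puppet classes on the current step
value; a minimal sketch (the profile class name is illustrative)::

  step_config: |
    if hiera('step') >= 4 {
      include ::tripleo::profile::base::example
    }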
Each service template may optionally define an `upgrade_batch_tasks` key,
which is a list of ansible tasks to be performed during the upgrade process.
Similar to the step_config, we allow a series of steps for the per-service
upgrade sequence, defined as ansible tasks with a tag, e.g. "step1" for the
first step, "step2" for the second, etc. (currently only two steps are
supported, but more may be added when required as additional services get
converted to batched upgrades).
Note that each step is performed in batches, then we move on to the next step,
which is also performed in batches (we don't perform all steps on one node and
then move on to the next one, which means you can sequence rolling upgrades of
dependent services via the step value).
The tasks performed at each step are service specific, but note that all batch
upgrade steps are performed before the `upgrade_tasks` described below. This
means that all services that support rolling upgrades can be upgraded without
downtime during `upgrade_batch_tasks`, then any remaining services are stopped
and upgraded during `upgrade_tasks`.
The default batch size is 1, but this can be overridden for each role via the
`upgrade_batch_size` option in roles_data.yaml.
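As an illustration, a service supporting rolling restarts might define
something like the following (the task content and service name are
hypothetical)::

  upgrade_batch_tasks:
    - name: Example rolling restart of a service
      tags: step1
      service: name=example-service state=restarted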
Each service template may optionally define an `upgrade_tasks` key, which is a
list of ansible tasks to be performed during the upgrade process.
Similar to the step_config, we allow a series of steps for the per-service
upgrade sequence, defined as ansible tasks with a tag, e.g. "step1" for the
first step, "step2" for the second, etc.
Steps/tags correlate to the following:
1) Stop all control-plane services.
2) Quiesce the control-plane, e.g. disable the LoadBalancer, stop the
   pacemaker cluster: this will stop the following resources:
   The exact order is controlled by the cluster constraints.
3) Perform a package update and install new packages: A general
   upgrade is done, and only new packages should go into service.
4) Start services needed for migration tasks (e.g. DB).
5) Perform any migration tasks, e.g. DB sync commands.
Note that the services are not started in the upgrade tasks - we instead
re-run puppet, which does any reconfiguration required for the new version,
then starts the services.
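Putting the step tags together, a hypothetical service might define tasks such
as the following (the service and package names are illustrative)::

  upgrade_tasks:
    - name: Stop example service
      tags: step1
      service: name=openstack-example-api state=stopped
    - name: Update example packages
      tags: step3
      yum: name=openstack-example state=latest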
Nova Server Metadata Settings
-----------------------------
One can use the hook of type `OS::TripleO::ServiceServerMetadataHook` to pass
entries to the nova instances' metadata. It is, however, disabled by default.
In order to enable it, one needs to override it in the resource registry. An
implementation of this hook needs to conform to the following:
* It needs to define an input called `RoleData` of json type. This gets as
  input the contents of the `role_data` for each role's ServiceChain.
* It needs to define an output called `metadata`, which will be given to the
  Nova Server resource as the instance's metadata.
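A conforming hook implementation might therefore be sketched as follows (the
template version and the metadata entries are illustrative)::

  heat_template_version: ocata

  parameters:
    RoleData:
      type: json
      default: {}

  outputs:
    metadata:
      value:
        example_key: example_value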