.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
Detailed architecture and interface specification
=================================================

This section describes a detailed implementation plan, which is based on the
high level architecture introduced in Section 3. Section 5.1 describes the
functional blocks of the Doctor architecture, which is followed by a high level
message flow in Section 5.2. Section 5.3 provides a mapping of selected existing
open source components to the building blocks of the Doctor architecture.
Thereby, the selection of components is based on their maturity and the gap
analysis executed in Section 4. Sections 5.4 and 5.5 detail the specification of
the related northbound interface and the related information elements. Finally,
Section 5.6 provides a first set of blueprints to address selected gaps required
for the realization of the functionalities of the Doctor project.
.. _impl_fb:

Functional Blocks
-----------------

This section introduces the functional blocks to form the VIM. OpenStack was
selected as the candidate for implementation. Inside the VIM, four different
building blocks are defined (see :numref:`figure6`).
.. figure:: images/figure6.png

Monitor
^^^^^^^

The Monitor module has the responsibility for monitoring the virtualized
infrastructure. There are already many existing tools and services (e.g. Zabbix)
to monitor different aspects of hardware and software resources which can be
used for this purpose.
Inspector
^^^^^^^^^

The Inspector module has the ability a) to receive various failure notifications
regarding physical resource(s) from Monitor module(s), b) to find the affected
virtual resource(s) by querying the resource map in the Controller, and c) to
update the state of the virtual resource (and physical resource).

The Inspector has drivers for different types of events and resources in order
to integrate any type of Monitor and Controller modules. It also uses a failure
policy database to decide on the failure selection and aggregation from raw
events. This failure policy database is configured by the Administrator.

The reason for separating the Inspector and Controller modules is to let the
Controller focus on simple operations by avoiding a tight integration of various
health check mechanisms into the Controller.
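The Inspector's three responsibilities can be sketched as follows. This is an
illustrative sketch only: the class names, the `FAILURE_POLICY` table, and the
Zabbix-style event type strings are assumptions for the example, not Doctor or
OpenStack identifiers.

```python
# Failure policy database (configured by the Administrator): maps raw event
# types to an aggregated fault decision, or None if the event is filtered out.
FAILURE_POLICY = {
    "zabbix.icmp_unreachable": "compute.host-down",
    "zabbix.nic_error": "compute.nic-failure",
    "zabbix.high_load": None,  # informational only, not treated as a fault
}

class Inspector:
    def __init__(self, controller):
        self.controller = controller  # provides the resource map

    def on_raw_event(self, event_type, hostname):
        """a) receive a raw failure notification from a Monitor."""
        fault_type = FAILURE_POLICY.get(event_type)
        if fault_type is None:
            return []  # filtered out by the failure policy
        # b) find affected virtual resources via the Controller's resource map
        affected = self.controller.virtual_resources_on(hostname)
        # c) update the state of the physical and virtual resources
        self.controller.set_physical_state(hostname, "down")
        for vm in affected:
            self.controller.set_virtual_state(vm, "error")
        return affected

class FakeController:
    """Minimal stand-in for the Controller's resource map."""
    def __init__(self):
        self.resource_map = {"host-1": ["vm-1", "vm-2"], "host-2": ["vm-3"]}
        self.physical_state = {}
        self.virtual_state = {}

    def virtual_resources_on(self, hostname):
        return self.resource_map.get(hostname, [])

    def set_physical_state(self, host, state):
        self.physical_state[host] = state

    def set_virtual_state(self, vm, state):
        self.virtual_state[vm] = state
```

The driver concept mentioned above would correspond to pluggable
implementations of `on_raw_event` per Monitor type.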
Controller
^^^^^^^^^^

The Controller is responsible for maintaining the resource map (i.e. the mapping
from physical resources to virtual resources), accepting update requests for the
resource state(s) (exposed as provider API), and sending all failure events
regarding virtual resources to the Notifier. Optionally, the Controller has the
ability to force the state of a given physical resource to down in the resource
map when it receives failure notifications from the Inspector for that given
physical resource. The Controller also re-calculates the capacity of the NFVI
when receiving a failure notification for a physical resource.

In a real-world deployment, the VIM may have several controllers, one for each
resource type, such as Nova, Neutron and Cinder in OpenStack. Each controller
maintains a database of virtual and physical resources which shall be the master
source for resource information inside the VIM.
Notifier
^^^^^^^^

The focus of the Notifier is on selecting and aggregating failure events
received from the Controller based on policies mandated by the Consumer.
Therefore, it allows the Consumer to subscribe to alarms regarding virtual
resources using a method such as an API endpoint. After receiving a fault
event from a Controller, it will notify the fault to the Consumer by referring
to the alarm configuration which was defined by the Consumer earlier on.

To reduce the complexity of the Controller, it is a good approach for the
Controllers to emit all notifications without any filtering mechanism and have
another service (i.e. the Notifier) handle those notifications properly. This is
the general philosophy of notifications in OpenStack. Note that a fault message
consumed by the Notifier is different from the fault message received by the
Inspector; the former message is related to virtual resources which are visible
to users with relevant ownership, whereas the latter is related to raw devices
or small entities which should be handled with administrator privileges.

The northbound interface between the Notifier and the Consumer/Administrator is
specified in :ref:`impl_nbi`.
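The per-Consumer selection described above can be sketched as a small filter
over fault events. The class and parameter names are illustrative assumptions
for this example, not part of the specification; only the MinSeverity semantics
(notify on severity strictly higher than the configured minimum) is taken from
the information elements defined later in this section.

```python
class Notifier:
    def __init__(self):
        self.subscriptions = {}  # subscription_id -> (consumer, filter)
        self.next_id = 1

    def subscribe(self, consumer, resource_ids=None, min_severity=0):
        """Store the Consumer's alarm configuration (subscribe filter)."""
        sub_id = self.next_id
        self.next_id += 1
        self.subscriptions[sub_id] = (consumer, resource_ids, min_severity)
        return sub_id

    def on_fault_event(self, resource_id, severity):
        """Fault event from a Controller: notify only matching Consumers."""
        delivered = []
        for consumer, resource_ids, min_severity in self.subscriptions.values():
            if resource_ids is not None and resource_id not in resource_ids:
                continue  # not in the subscribed resource set
            if severity <= min_severity:
                continue  # MinSeverity: only strictly higher severities pass
            consumer.append((resource_id, severity))  # deliver northbound
            delivered.append(consumer)
        return delivered
```

A Consumer here is modeled simply as a list collecting its notifications; in a
real deployment it would be a callback or message endpoint.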
Fault management
----------------

The detailed work flow for fault management is as follows (see also
:numref:`figure7`):

1. Request to subscribe to monitor specific virtual resources. A query filter
   can be used to narrow down the alarms the Consumer wants to be informed
   about.
2. Each subscription request is acknowledged with a subscribe response message.
   The response message contains information about the subscribed virtual
   resources, in particular if a subscribed virtual resource is in "alarm"
   state.
3. The NFVI sends monitoring events for resources the VIM has been subscribed
   to. Note: this subscription message exchange between the VIM and NFVI is not
   shown in this message flow.
4. Event correlation, fault detection and aggregation in the VIM.
5. Database lookup to find the virtual resources affected by the detected fault.
6. Fault notification to the Consumer.
7. The Consumer switches to the standby configuration (STBY).
8. Instructions to the VIM requesting certain actions to be performed on the
   affected resources, for example migrate/update/terminate specific
   resource(s). After reception of such instructions, the VIM executes the
   requested action, e.g. it will migrate or terminate a virtual resource.

a. Query request from the Consumer to the VIM to get information about the
   current status of a resource.
b. Response to the query request with information about the current status of
   the queried resource. In case the resource is in "fault" state, information
   about the related fault(s) is returned.

In order to allow for quick reaction to failures, the time interval between
fault detection in step 3 and the corresponding recovery actions in steps 7 and
8 shall be less than 1 second.
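The subscribe/notify cycle of steps 1-2 and 4-7 can be sketched as below. The
class and method names are illustrative assumptions, not specified interfaces;
the point of the sketch is that the notification is delivered synchronously
from the event handler, the kind of direct path needed to stay within the
1-second budget stated above.

```python
class Vim:
    def __init__(self):
        self.host_to_vms = {"host-A": ["vm-1"]}   # resource map (step 5)
        self.subscribers = []

    def subscribe(self, callback):                # steps 1 and 2
        self.subscribers.append(callback)
        return {"subscription_id": len(self.subscribers)}

    def on_monitoring_event(self, host):          # steps 3 to 6
        affected = self.host_to_vms.get(host, []) # step 5: database lookup
        for callback in self.subscribers:
            callback(affected)                    # step 6: fault notification
        return affected

class Consumer:
    def __init__(self):
        self.config = "ACT"

    def on_fault(self, resources):                # step 7
        if resources:
            self.config = "STBY"
```

Step 8 (recovery instructions back to the VIM) would reuse the existing
resource management interfaces and is omitted here.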
.. figure:: images/figure7.png

   Fault management work flow

.. figure:: images/figure8.png

   Fault management scenario
:numref:`figure8` shows a more detailed message flow (steps 4 to 6) between
the four building blocks introduced in :ref:`impl_fb`.

4. The Monitor observes a fault in the NFVI and reports the raw fault to the
   Inspector. The Inspector filters and aggregates the faults using
   pre-configured failure policies.
5. a) The Inspector queries the Resource Map to find the virtual resources
      affected by the raw fault in the NFVI.
   b) The Inspector updates the state of the affected virtual resources in the
      Resource Map database.
   c) The Controller observes a change of the virtual resource state and informs
      the Notifier about the state change and the related alarm(s).
      Alternatively, the Inspector may directly inform the Notifier about it.
6. The Notifier performs another filtering and aggregation of the changes and
   alarms based on the pre-configured alarm configuration. Finally, a fault
   notification is sent northbound to the Consumer.
NFVI maintenance
----------------

.. figure:: images/figure9.png

   NFVI maintenance work flow

The detailed work flow for NFVI maintenance is shown in :numref:`figure9`
and has the following steps. Note that steps 1, 2, and 5 to 8a in the NFVI
maintenance work flow are very similar to the steps in the fault management work
flow and share a similar implementation plan in Release 1.
1. Subscribe to fault/maintenance notifications.
2. Response to the subscribe request.
3. Maintenance trigger received from the Administrator.
4. The VIM switches the NFVI resources to "maintenance" state. This means, e.g.,
   that they should not be used for further allocation/migration requests.
5. Database lookup to find the virtual resources affected by the planned
   maintenance operation.
6. Maintenance notification to the Consumer.
7. The Consumer switches to the standby configuration (STBY).
8. Instructions from the Consumer to the VIM requesting certain recovery actions
   to be performed (step 8a). After reception of such instructions, the VIM
   executes the requested action in order to empty the physical resources (step
   8b).
9. Maintenance response from the VIM to inform the Administrator that the
   physical machines have been emptied (or the operation resulted in an error
   state).
10. The Administrator coordinates and executes the maintenance operation/work on
    the physical machine(s).

a. Query request from the Administrator to the VIM to get information about the
   current state of a resource.
b. Response to the query request with information about the current state of
   the queried resource(s). In case the resource is in "maintenance" state,
   information about the related maintenance operation is returned.
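Step 4's rule that resources in "maintenance" state must be excluded from
further allocation/migration can be sketched as a small state table. All names
here are illustrative assumptions, not VIM interfaces; the resource states
"normal" and "maintenance" are taken from the information elements defined
later in this section.

```python
class ResourcePool:
    def __init__(self, hosts):
        self.state = {h: "normal" for h in hosts}

    def start_maintenance(self, host):
        self.state[host] = "maintenance"   # step 4: switch state

    def finish_maintenance(self, host):
        self.state[host] = "normal"        # after step 10 completes

    def schedulable_hosts(self):
        """Hosts still usable for allocation/migration requests."""
        return [h for h, s in self.state.items() if s == "normal"]
```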
.. figure:: images/figure10.png

   NFVI maintenance implementation plan

:numref:`figure10` shows a more detailed message flow (steps 3 to 6 and 9)
between the four building blocks introduced in :ref:`impl_fb`.

3. The Administrator sends a StateChange request to the Controller residing in
   the VIM.
4. The Controller queries the Resource Map to find the virtual resources
   affected by the planned maintenance operation.
5. a) The Controller updates the state of the affected virtual resources in the
      Resource Map database.
   b) The Controller informs the Notifier about the virtual resources that will
      be affected by the maintenance operation.
6. A maintenance notification is sent northbound to the Consumer.
9. The Controller informs the Administrator after the physical resources have
   been emptied.
Implementation plan for OPNFV Release 1
---------------------------------------

Fault management
^^^^^^^^^^^^^^^^

:numref:`figure11` shows the implementation plan based on OpenStack and
related components as planned for Release 1. Hereby, the Monitor can be realized
by Zabbix. The Controller is realized by OpenStack Nova [NOVA]_, Neutron
[NEUT]_, and Cinder [CIND]_ for compute, network, and storage,
respectively. The Inspector can be realized by Monasca [MONA]_ or a simple
script querying Nova in order to map between physical and virtual resources. The
Notifier will be realized by Ceilometer [CEIL]_ receiving failure events
on its notification bus.
:numref:`figure12` shows the inner workings of Ceilometer. After receiving
an "event" on its notification bus, a notification agent will first grab the
event and send a "notification" to the Collector. The Collector writes the
received notifications to the Ceilometer databases.

In the existing Ceilometer implementation, an alarm evaluator is periodically
polling those databases through the APIs provided. If it finds new alarms, it
will evaluate them based on the pre-defined alarm configuration, and depending
on the configuration, it will hand a message to the Alarm Notifier, which in
turn will send the alarm message northbound to the Consumer. :numref:`figure12`
also shows an optimized work flow for Ceilometer with the goal to
reduce the delay for fault notifications to the Consumer. The approach is to
implement a new notification agent (called "publisher" in Ceilometer
terminology) which directly sends the alarm through the "Notification Bus"
to a new "Notification-driven Alarm Evaluator (NAE)" (see Sections 5.6.2 and
5.6.3), thereby bypassing the Collector and avoiding the additional delay of the
existing polling-based alarm evaluator. The NAE is similar to the OpenStack
"Alarm Evaluator", but is triggered by incoming notifications instead of
periodically polling the OpenStack "Alarms" database for new alarms. The
Ceilometer "Alarms" database can hold three states: "normal", "insufficient
data", and "fired". It represents a persistent alarm database. In order to
realize the Doctor requirements, we need to define new "meters" in the
database.
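As a toy illustration of the three alarm states named above, a threshold alarm
might be evaluated as follows. This is a sketch under assumed semantics, not
Ceilometer code: the exact statistics and comparison operators are configured
per alarm in the real system.

```python
def evaluate_alarm(samples, threshold):
    """Return the alarm state for a list of meter samples."""
    if not samples:
        return "insufficient data"   # no samples available to evaluate
    if all(value > threshold for value in samples):
        return "fired"               # threshold crossed; notify northbound
    return "normal"
```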
.. figure:: images/figure11.png

   Implementation plan in OpenStack (OPNFV Release 1 "Arno")

.. figure:: images/figure12.png

   Implementation plan in Ceilometer architecture
NFVI maintenance
^^^^^^^^^^^^^^^^

For NFVI maintenance, a quite similar implementation plan exists. Instead of a
raw fault being observed by the Monitor, the Administrator sends a
Maintenance Request through the northbound interface towards the Controller
residing in the VIM. Similar to the fault management use case, the Controller
(in our case OpenStack Nova) will send a maintenance event to the Notifier (i.e.
Ceilometer in our implementation). Within Ceilometer, the same work flow as
described in the previous section applies. In addition, the Controller(s) will
take appropriate actions to evacuate the physical machines in order to prepare
them for the planned maintenance operation. After the physical machines are
emptied, the Controller will inform the Administrator that it can initiate the
maintenance. Alternatively, the VMs can just be shut down and booted up again on
the same host after the maintenance is over. There needs to be a policy allowing
the Administrator to know the plan for VMs in maintenance.
Information elements
--------------------

This section introduces all attributes and information elements used in the
messages exchanged on the northbound interfaces between the VIM and the VNFO
and VNFM.

Note: The information elements will be aligned with current work in ETSI NFV
IFA.
Simple information elements:

* SubscriptionID (Identifier): identifies a subscription to receive fault or
  maintenance notifications.
* NotificationID (Identifier): identifies a fault or maintenance notification.
* VirtualResourceID (Identifier): identifies a virtual resource affected by a
  fault or a maintenance action of the underlying physical resource.
* PhysicalResourceID (Identifier): identifies a physical resource affected by a
  fault or maintenance action.
* VirtualResourceState (String): state of a virtual resource, e.g. "normal",
  "maintenance", "down", "error".
* PhysicalResourceState (String): state of a physical resource, e.g. "normal",
  "maintenance", "down", "error".
* VirtualResourceType (String): type of the virtual resource, e.g. "virtual
  machine", "virtual memory", "virtual storage", "virtual CPU", or "virtual
  network".
* FaultID (Identifier): identifies the related fault in the underlying physical
  resource. This can be used to correlate different fault notifications caused
  by the same fault in the physical resource.
* FaultType (String): type of the fault. The allowed values for this parameter
  depend on the type of the related physical resource. For example, a resource
  of type "compute hardware" may have faults of type "CPU failure", "memory
  failure", "network card failure", etc.
* Severity (Integer): value expressing the severity of the fault. The higher the
  value, the more severe the fault.
* MinSeverity (Integer): value used in filter information elements. Only faults
  with a severity higher than the MinSeverity value will be notified to the
  Consumer.
* EventTime (Datetime): time when the fault was observed.
* EventStartTime and EventEndTime (Datetime): datetime range that can be used in
  a FaultQueryFilter to narrow down the faults to be queried.
* ProbableCause: information about the probable cause of the fault.
* CorrelatedFaultID (Identifier): list of other faults correlated to this fault.
* isRootCause (Boolean): parameter indicating if this fault is the root cause of
  other correlated faults. If TRUE, then the faults listed in the parameter
  CorrelatedFaultID are caused by this fault.
* FaultDetails (Key-value pair): provides additional information about the
  fault, e.g. information about the threshold, monitored attributes, or an
  indication of the trend of the monitored parameter.
* FirmwareVersion (String): current version of the firmware of a physical
  resource.
* HypervisorVersion (String): current version of a hypervisor.
* ZoneID (Identifier): identifier of the resource zone. A resource zone is the
  logical separation of physical and software resources in an NFVI deployment
  for physical isolation, redundancy, or administrative designation.
* Metadata (Key-value pair): provides additional information on a physical
  resource in maintenance/error state.
Complex information elements (see also UML diagrams in :numref:`figure13`
and :numref:`figure14`):

* VirtualResourceInfoClass:

  + VirtualResourceID [1] (Identifier)
  + VirtualResourceState [1] (String)
  + Faults [0..*] (FaultClass): For each resource, all faults
    including detailed information about the faults are provided.

* FaultClass: The parameters of the FaultClass are partially based on ETSI TS
  132 111-2 (V12.1.0) [*]_, which is specifying fault management in 3GPP, in
  particular describing the information elements used for alarm notifications.

  - FaultID [1] (Identifier)
  - FaultType [1] (String)
  - Severity [1] (Integer)
  - EventTime [1] (Datetime)
  - ProbableCause [1]
  - CorrelatedFaultID [0..*] (Identifier)
  - FaultDetails [0..*] (Key-value pair)

.. [*] http://www.etsi.org/deliver/etsi_ts/132100_132199/13211102/12.01.00_60/ts_13211102v120100p.pdf
* SubscribeFilterClass:

  - VirtualResourceType [0..*] (String)
  - VirtualResourceID [0..*] (Identifier)
  - FaultType [0..*] (String)
  - MinSeverity [0..1] (Integer)

* FaultQueryFilterClass: narrows down the FaultQueryRequest, for example it
  limits the query to certain physical resources, a certain zone, a given fault
  type/severity/cause, or a specific FaultID.

  - VirtualResourceType [0..*] (String)
  - VirtualResourceID [0..*] (Identifier)
  - FaultType [0..*] (String)
  - MinSeverity [0..1] (Integer)
  - EventStartTime [0..1] (Datetime)
  - EventEndTime [0..1] (Datetime)

* PhysicalResourceStateClass:

  - PhysicalResourceID [1] (Identifier)
  - PhysicalResourceState [1] (String): mandates the new state of the physical
    resource.

* PhysicalResourceInfoClass:

  - PhysicalResourceID [1] (Identifier)
  - PhysicalResourceState [1] (String)
  - FirmwareVersion [0..1] (String)
  - HypervisorVersion [0..1] (String)
  - ZoneID [0..1] (Identifier)

* StateQueryFilterClass: narrows down a StateQueryRequest, for example it limits
  the query to certain physical resources, a certain zone, or a given resource
  state (e.g., only resources in "maintenance" state).

  - PhysicalResourceID [1] (Identifier)
  - PhysicalResourceState [1] (String)
  - ZoneID [0..1] (Identifier)
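The complex information elements above can be transcribed, for example, as
Python dataclasses: multiplicities like [0..*] map to lists and [0..1] to
optional fields. This is an illustrative transcription for readability, not a
normative data model.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class Fault:
    fault_id: str                    # FaultID [1] (Identifier)
    severity: int                    # Severity [1] (Integer)
    event_time: datetime             # EventTime [1] (Datetime)
    correlated_fault_ids: list[str] = field(default_factory=list)  # [0..*]
    fault_details: dict[str, str] = field(default_factory=dict)    # [0..*]

@dataclass
class VirtualResourceInfo:
    virtual_resource_id: str         # VirtualResourceID [1] (Identifier)
    virtual_resource_state: str      # VirtualResourceState [1] (String)
    faults: list[Fault] = field(default_factory=list)  # Faults [0..*]

@dataclass
class SubscribeFilter:
    virtual_resource_types: list[str] = field(default_factory=list)  # [0..*]
    virtual_resource_ids: list[str] = field(default_factory=list)    # [0..*]
    fault_types: list[str] = field(default_factory=list)             # [0..*]
    min_severity: Optional[int] = None                               # [0..1]
```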
.. _impl_nbi:

Detailed northbound interface specification
-------------------------------------------

This section specifies the northbound interfaces for fault management and
NFVI maintenance between the VIM on the one end and the Consumer and the
Administrator on the other end. For each interface, all messages and related
information elements are provided.

Note: The interface definition will be aligned with current work in ETSI NFV
IFA.

All of the interfaces described below are produced by the VIM and consumed by
the Consumer or Administrator.

Fault management interface
^^^^^^^^^^^^^^^^^^^^^^^^^^

This interface allows the VIM to notify the Consumer about a virtual resource
that is affected by a fault, either within the virtual resource itself or by the
underlying virtualization infrastructure. The messages on this interface are
shown in :numref:`figure13` and explained in detail in the following
subsections.

Note: The information elements used in this section are described in detail in
the previous section.

.. figure:: images/figure13.png

   Fault management NB I/F messages
SubscribeRequest (Consumer -> VIM)
__________________________________

Subscription from the Consumer to the VIM to be notified about faults of
specific resources. The faults to be notified about can be narrowed down using
a subscribe filter.

Parameters:

* SubscribeFilter [1] (SubscribeFilterClass): Optional information to narrow
  down the faults that shall be notified to the Consumer, for example limited
  to specific VirtualResourceID(s), severity, or cause of the alarm.

SubscribeResponse (VIM -> Consumer)
___________________________________

Response to a subscribe request message including information about the
subscribed resources, in particular if they are in "fault/error" state.

Parameters:

* SubscriptionID [1] (Identifier): Unique identifier for the subscription. It
  can be used to delete or update the subscription.
* VirtualResourceInfo [0..*] (VirtualResourceInfoClass): Provides additional
  information about the subscribed resources, i.e., a list of the related
  resources, the current state of the resources, etc.
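A possible wire representation of this subscribe exchange is shown below as
JSON-style dictionaries. The field spellings are assumptions for illustration:
the specification defines information elements, not a concrete encoding.

```python
subscribe_request = {
    "subscribe_filter": {                # SubscribeFilterClass
        "virtual_resource_ids": ["vm-1", "vm-2"],
        "min_severity": 3,               # only faults with severity > 3
    }
}

subscribe_response = {
    "subscription_id": "sub-42",         # SubscriptionID [1]
    "virtual_resource_info": [           # VirtualResourceInfo [0..*]
        {"virtual_resource_id": "vm-1",
         "virtual_resource_state": "normal",
         "faults": []},
        {"virtual_resource_id": "vm-2",
         "virtual_resource_state": "error",   # already in fault/error state
         "faults": [{"fault_id": "f-7", "severity": 5}]},
    ],
}
```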
FaultNotification (VIM -> Consumer)
___________________________________

Notification about a virtual resource that is affected by a fault, either within
the virtual resource itself or by the underlying virtualization infrastructure.
After reception of this notification, the Consumer will decide on the optimal
action to resolve the fault. This includes actions like switching to a hot
standby virtual resource, migration of the faulty virtual resource to another
physical machine, or termination of the faulty virtual resource and
instantiation of a new virtual resource in order to provide a new hot standby
resource. In some use cases the Consumer can leave virtual resources on the
failed host to be booted up again after the fault is recovered. Existing
resource management interfaces and messages between the Consumer and the VIM
can be used for those actions, and there is no need to define additional
actions on the Fault Management Interface.

Parameters:

* NotificationID [1] (Identifier): Unique identifier for the notification.
* VirtualResourceInfo [1..*] (VirtualResourceInfoClass): List of faulty
  resources with detailed information about the faults.

FaultQueryRequest (Consumer -> VIM)
___________________________________

Request to find out about active alarms at the VIM. A FaultQueryFilter can be
used to narrow down the alarms returned in the response message.

Parameters:

* FaultQueryFilter [1] (FaultQueryFilterClass): narrows down the
  FaultQueryRequest, for example it limits the query to certain physical
  resources, a certain zone, a given fault type/severity/cause, or a specific
  FaultID.

FaultQueryResponse (VIM -> Consumer)
____________________________________

List of active alarms at the VIM matching the FaultQueryFilter specified in the
FaultQueryRequest.

Parameters:

* VirtualResourceInfo [0..*] (VirtualResourceInfoClass): List of faulty
  resources. For each resource, all faults including detailed information about
  the faults are provided.
NFVI maintenance interface
^^^^^^^^^^^^^^^^^^^^^^^^^^

The NFVI maintenance interface Consumer-VIM allows the Consumer to subscribe to
maintenance notifications provided by the VIM. The related maintenance interface
Administrator-VIM allows the Administrator to issue maintenance requests to the
VIM, i.e. requesting the VIM to take appropriate actions to empty physical
machine(s) in order to execute maintenance operations on them. The interface
also allows the Administrator to query the state of physical machines, e.g., in
order to get details on the current status of a maintenance operation, like a
firmware update.

The messages defined in these northbound interfaces are shown in
:numref:`figure14` and described in detail in the following subsections.

.. figure:: images/figure14.png

   NFVI maintenance NB I/F messages
SubscribeRequest (Consumer -> VIM)
__________________________________

Subscription from the Consumer to the VIM to be notified about maintenance
operations for specific virtual resources. The resources to be informed about
can be narrowed down using a subscribe filter.

Parameters:

* SubscribeFilter [1] (SubscribeFilterClass): Information to narrow down the
  faults that shall be notified to the Consumer, for example limited to
  specific virtual resource type(s).

SubscribeResponse (VIM -> Consumer)
___________________________________

Response to a subscribe request message, including information about the
subscribed virtual resources, in particular if they are in "maintenance" state.

Parameters:

* SubscriptionID [1] (Identifier): Unique identifier for the subscription. It
  can be used to delete or update the subscription.
* VirtualResourceInfo [0..*] (VirtualResourceInfoClass): Provides additional
  information about the subscribed virtual resource(s), e.g., the ID, type and
  current state of the resource(s).

MaintenanceNotification (VIM -> Consumer)
_________________________________________

Notification about a physical resource switched to "maintenance" state. After
reception of this notification, the Consumer will decide on the optimal action
to address it, e.g., to switch to the standby (STBY) configuration.

Parameters:

* VirtualResourceInfo [1..*] (VirtualResourceInfoClass): List of virtual
  resources whose state has been changed to maintenance.
StateChangeRequest (Administrator -> VIM)
_________________________________________

Request to change the state of a list of physical resources, e.g. to
"maintenance" state, in order to prepare them for a planned maintenance
operation.

Parameters:

* PhysicalResourceState [1..*] (PhysicalResourceStateClass)

StateChangeResponse (VIM -> Administrator)
__________________________________________

Response message to inform the Administrator that the requested resources are
now in maintenance state (or the operation resulted in an error) and the
maintenance operation(s) can be executed.

Parameters:

* PhysicalResourceInfo [1..*] (PhysicalResourceInfoClass)

StateQueryRequest (Administrator -> VIM)
________________________________________

In this procedure, the Administrator would like to get information about
physical machine(s), e.g. their state ("normal", "maintenance"), firmware
version, hypervisor version, update status of firmware and hypervisor, etc. It
can be used to check the progress during a firmware update and for confirmation
after the update. A filter can be used to narrow down the resources returned in
the response message.

Parameters:

* StateQueryFilter [1] (StateQueryFilterClass): narrows down the
  StateQueryRequest, for example it limits the query to certain physical
  resources, a certain zone, or a given resource state.

StateQueryResponse (VIM -> Administrator)
_________________________________________

List of physical resources matching the filter specified in the
StateQueryRequest.

Parameters:

* PhysicalResourceInfo [0..*] (PhysicalResourceInfoClass): List of physical
  resources. For each resource, information about the current state, the
  firmware version, etc. is provided.
Blueprints
----------

This section lists a first set of blueprints that have been proposed by the
Doctor project to the open source community. Further blueprints addressing other
gaps identified in Section 4 will be submitted at a later stage of the OPNFV
project. In this section the following definitions are used:

* "Event" is a message emitted by other OpenStack services such as Nova and
  Neutron and is consumed by the "Notification Agents" in Ceilometer.
* "Notification" is a message generated by a "Notification Agent" in Ceilometer
  based on an "event" and is delivered to the "Collectors" in Ceilometer that
  store those notifications (as "samples") in the Ceilometer "Databases".
Instance State Notification (Ceilometer) [*]_
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Doctor project is planning to handle "events" and "notifications" regarding
resource status: instance state, port state, host state, etc. Currently,
Ceilometer already receives "events" to identify the state of those resources,
but it does not handle and store them yet. This is why we also need a new event
definition to capture those resource states from "events" created by other
services.

This BP proposes to add a new compute notification state to handle events from
an instance (server) from Nova. It also creates a new meter "instance.state" in
OpenStack.

.. [*] https://etherpad.opnfv.org/p/doctor_bps
Event Publisher for Alarm (Ceilometer) [*]_
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

**Problem statement:**

The existing "Alarm Evaluator" in OpenStack Ceilometer is periodically
querying/polling the databases in order to check all alarms independently from
other processes. This adds additional delay to the fault notification sent to
the Consumer, whereas one requirement of Doctor is to react on faults as fast
as possible.

The existing message flow is shown in :numref:`figure12`: after receiving
an "event", a "notification agent" (i.e. "event publisher") will send a
"notification" to a "Collector". The "Collector" is collecting the
notifications and is updating the Ceilometer "Meter" database that is storing
information about the "sample" which is captured from the original "event". The
"Alarm Evaluator" is periodically polling this database, querying the "Meter"
database based on each alarm configuration.

In the current Ceilometer implementation, there is no possibility to directly
trigger the "Alarm Evaluator" when a new "event" is received; the "Alarm
Evaluator" will only find out that a new notification needs to be fired to the
Consumer when polling the database.

**Change/feature request:**

This BP proposes to add a new "event publisher for alarm", which bypasses
several steps in Ceilometer in order to avoid the polling-based approach of
the existing Alarm Evaluator, which makes notifications to users slow.

After receiving an "(alarm) event" by listening on the Ceilometer message
queue ("notification bus"), the new "event publisher for alarm" immediately
hands a "notification" about this event to a new Ceilometer component
"Notification-driven alarm evaluator" proposed in the other BP (see Section
5.6.3).

Note, the term "publisher" refers to an entity in the Ceilometer architecture
(it is a "notification agent"). It offers the capability to provide
notifications to other services outside of Ceilometer, but it is also used to
deliver notifications to other Ceilometer components (e.g. the "Collectors")
via the Ceilometer "notification bus".

**Implementation detail:**

* The "Event publisher for alarm" is part of Ceilometer.
* The standard AMQP message queue is used with a new topic string.
* No new interfaces have to be added to Ceilometer.
* The "Event publisher for alarm" can be configured by the Administrator of
  Ceilometer to be used as "Notification Agent" in addition to the existing
  agents.
* Existing alarm mechanisms of Ceilometer can be used, allowing users to
  configure how to distribute the "notifications" transformed from "events",
  e.g. there is an option whether an ongoing alarm is re-issued or not.

.. [*] https://etherpad.opnfv.org/p/doctor_bps
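The behavior proposed above can be sketched as follows. The message bus is
modeled as a plain dictionary of topic-to-callback mappings, and the topic
names and event types are illustrative assumptions; none of these identifiers
are actual Ceilometer or AMQP names.

```python
class Bus:
    """Toy stand-in for the Ceilometer notification bus."""
    def __init__(self):
        self.topics = {}

    def subscribe(self, topic, callback):
        self.topics.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self.topics.get(topic, []):
            callback(message)

# Example set of event types considered alarm-relevant (assumed, not real).
ALARM_EVENT_TYPES = {"compute.instance.update", "compute.host.down"}

def make_alarm_publisher(bus, alarm_topic="alarm.all"):
    """Attach a publisher that forwards alarm-relevant events immediately,
    bypassing the Collector/database and the polling-based evaluator."""
    def on_event(event):
        if event["event_type"] in ALARM_EVENT_TYPES:
            bus.publish(alarm_topic, event)  # hand over without delay
    bus.subscribe("notifications.info", on_event)
```

The new topic string mentioned in the implementation details corresponds to
`alarm_topic` here; a "Notification-driven Alarm Evaluator" would subscribe to
that topic.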
732 Notification-driven alarm evaluator (Ceilometer) [*]_
733 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
735 **Problem statement:**
737 The existing "Alarm Evaluator" in OpenStack Ceilometer is periodically
738 querying/polling the databases in order to check all alarms independently from
739 other processes. This is adding additional delay to the fault notification send
740 to the Consumer, whereas one requirement of Doctor is to react on faults as fast
**Change/feature request:**

This BP proposes to add an alternative "Notification-driven Alarm Evaluator"
for Ceilometer that receives the "notifications" sent by the "Event Publisher
for Alarm" described in the other BP. Once this new "Notification-driven Alarm
Evaluator" has received a "notification", it finds the "alarm" configurations
which may relate to the "notification" by querying the "alarm" database with
some keys, i.e. the resource ID, and then evaluates each alarm with the
information in that "notification".
After the alarm evaluation, it acts in the same way as the existing "alarm
evaluator" does when firing alarm notifications to the Consumer. Similar to
the existing Alarm Evaluator, this new "Notification-driven Alarm Evaluator"
aggregates and correlates different alarms which are then provided northbound
to the Consumer via the OpenStack "Alarm Notifier". The user/administrator can
register the alarm configuration via the existing Ceilometer API [*]_ and can
thereby configure whether to set an alarm or not and where to send the alarms
to.
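The lookup-and-evaluate flow could be sketched as below; the class name, the
alarm database layout, and the threshold check are illustrative assumptions,
not Ceilometer's actual implementation.

```python
# Conceptual sketch (names and data model are illustrative): an alarm
# evaluator that reacts to each incoming "notification" instead of
# periodically polling the alarm database.
class NotificationDrivenAlarmEvaluator:
    def __init__(self, alarm_db, notifier):
        # alarm_db maps resource_id -> list of alarm configurations.
        self._alarm_db = alarm_db
        # notifier is a callable standing in for the OpenStack "Alarm Notifier".
        self._notifier = notifier

    def on_notification(self, notification):
        resource_id = notification["resource_id"]
        # Query only the alarms that may relate to this notification,
        # keyed by resource ID.
        for alarm in self._alarm_db.get(resource_id, []):
            if self._evaluate(alarm, notification):
                # Fire the alarm the same way the existing evaluator does.
                self._notifier({"alarm_id": alarm["alarm_id"],
                                "state": "alarm",
                                "resource_id": resource_id})

    @staticmethod
    def _evaluate(alarm, notification):
        # Simplified threshold check standing in for the real evaluation logic.
        return notification["value"] >= alarm["threshold"]


fired = []
db = {"host-1": [{"alarm_id": "a1", "threshold": 1}]}
evaluator = NotificationDrivenAlarmEvaluator(db, fired.append)
evaluator.on_notification({"resource_id": "host-1", "value": 1})  # fires
evaluator.on_notification({"resource_id": "host-2", "value": 5})  # no alarm configured
```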
**Implementation detail**

* The new "Notification-driven Alarm Evaluator" is part of Ceilometer.
* Most of the existing source code of the "Alarm Evaluator" can be re-used to
  implement this BP.
* No additional application logic is needed.
* It will access the Ceilometer databases just like the existing "Alarm
  Evaluator".
* Only the polling-based approach will be replaced by a listener for
  "notifications" provided by the "Event Publisher for Alarm" on the
  Ceilometer "notification bus".
* No new interfaces have to be added to Ceilometer.
.. [*] https://etherpad.opnfv.org/p/doctor_bps
.. [*] https://wiki.openstack.org/wiki/Ceilometer/Alerting
Report host fault to update server state immediately (Nova) [*]_
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
**Problem statement:**

* The Nova state change for a failed or unreachable host is slow and does not
  reliably state whether the host is down or not. This might cause the same
  server instance to run twice if action is taken to evacuate an instance to
  another host.
* The Nova state for the server(s) on a failed host will not change, but
  remains active and running. This gives the user false information about the
  server state.
* The VIM northbound interface notification of host faults towards VNFM and
  NFVO should be in line with the OpenStack state. This fault notification is
  a Telco requirement defined in ETSI and will be implemented by the OPNFV
  Doctor project.
* An OpenStack user cannot take HA actions fast and reliably by trusting the
  server state and host state.
**Change/feature request:**

There needs to be a new API for the Admin to state that a host is down. This
API is used to mark the services running on the host as down, to reflect the
real situation.
Example on a compute node:

* When the compute node is up and running::

    vm_state: active and power_state: running
    nova-compute state: up status: enabled

* When the compute node goes down and the new API is called to state that the
  host is down::

    vm_state: stopped power_state: shutdown
    nova-compute state: down status: enabled
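The intended semantics of the new API can be sketched as follows; the function
name and the dictionary-based data model are hypothetical stand-ins for Nova's
service and server records, not Nova's actual implementation.

```python
# Conceptual sketch: marking a host down should immediately update the
# nova-compute service state and the state of every server on that host,
# matching the before/after example above.
def mark_host_down(services, servers, host):
    """Reflect a detected host fault in the service and server records."""
    for service in services:
        if service["host"] == host:
            service["state"] = "down"           # nova-compute state: down
    for server in servers:
        if server["host"] == host:
            server["vm_state"] = "stopped"      # was: active
            server["power_state"] = "shutdown"  # was: running


services = [{"host": "node-1", "binary": "nova-compute",
             "state": "up", "status": "enabled"}]
servers = [{"id": "vm-1", "host": "node-1",
            "vm_state": "active", "power_state": "running"}]

mark_host_down(services, servers, "node-1")
```

Note that only the `state` changes; the service `status` stays `enabled`, as
in the example above, since the host is faulty rather than administratively
disabled.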
There is no attractive alternative for detecting all the different host faults
other than having an external tool detect them. For this kind of tool to
exist, there needs to be a new API in Nova to report the fault. Currently,
workarounds of some kind must be implemented, as the states cannot be trusted
or retrieved from OpenStack fast enough.
.. [*] https://blueprints.launchpad.net/nova/+spec/update-server-state-immediately
This section lists some BPs related to Doctor, but proposed by drafters
outside the Doctor project.
pacemaker-servicegroup-driver [*]_
__________________________________
This BP will detect and report a host as down quite fast to OpenStack.
However, this might not work properly, for example, when the management
network has a problem and the host is reported as faulty while VMs are still
running there. This might lead to launching the same VM instance twice,
causing problems. Also, the NB I/F message needs a fault reason, and for that
the source needs to be a tool that detects different kinds of faults, as
Doctor will be doing. This BP might also need enhancement to change the server
and service states correctly.
.. [*] https://blueprints.launchpad.net/nova/+spec/pacemaker-servicegroup-driver
.. vim: set tabstop=4 expandtab textwidth=80: