Autoscale Load Balancing Rule loses VMs during a node failure. #9145

@btzq

Description

ISSUE TYPE
  • Bug Report
COMPONENT NAME
Autoscaling, Load Balancer
CLOUDSTACK VERSION
4.19.0
CONFIGURATION
OS / ENVIRONMENT
SUMMARY

We have a few Autoscale Rules running in production. For simplicity, let's say we have 1 Autoscale Rule, with 4 VMs currently under the Rule.

2 of the VMs are located on a host that experienced a node failure.

During the node failure, the VMs are restarted on a new host, and the 2 VMs start up again there. However, when you open the Autoscale Load Balancing Rule, you will notice that these 2 VMs are no longer under the Autoscale Rule. They are now orphans.

However, the autoscale rule still reports a total of 4 scaled-up VMs.

In summary,

  • Total no. of VMs resulting from autoscale: 4 VMs
  • No. of VMs under the LB rule: 2 VMs
  • No. of VMs orphaned: 2 VMs
STEPS TO REPRODUCE
- Create an Autoscale Rule with multiple VMs
- Live-migrate the VMs to another host
- Force power off that host to simulate a node failure
- Wait for the VMs to restart on a new node
- Go to the Autoscale Load Balancer and display the list of VMs under the load balancing rule
- If the issue does not reproduce, retry until it happens. It may be intermittent.
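After the VMs restart, the discrepancy between the group's counter and the rule's actual membership can be checked from the API side, for example with CloudMonkey (cmk). This is a rough sketch; the IDs are placeholders and the exact output fields depend on your CloudStack version:

```shell
# Placeholder IDs; run against a configured CloudMonkey (cmk) profile.

# What the autoscale VM group believes it manages (reported VM count):
cmk list autoscalevmgroups id=<autoscale-group-id>

# Which VMs are actually still attached to the load balancing rule:
cmk list loadbalancerruleinstances id=<lb-rule-id>
```

When the bug occurs, the second list should contain fewer VMs than the group claims, with the restarted VMs missing.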
EXPECTED RESULTS
Logic should be implemented to:
- Scale up the VMs when a node is detected as down (to fulfill the scale-up requirement)
- Not restart the VMs from the failed host, to prevent confusion
ACTUAL RESULTS
VMs restarted after the node failure are missing from the load balancing rule, while the autoscale rule still counts them as scaled-up members.
