Moving an Aggregate within a cDOT Cluster from one HA Pair to Another

2018-02-06

Use these commands at your own risk 🙂
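
All of the commands below are run at an elevated privilege level (that is what the * in the cluster::*> prompt indicates); the debug vreport commands used later require diagnostic privilege. If your session is still at admin privilege, switch first:

cluster::> set -privilege diagnostic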

Take the aggregate offline and remove the disk ownership:

cluster::*> aggr offline aggr2
cluster::*> disk removeowner 1a.20.*
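
Before recabling, it is worth confirming that the disks really show up as unowned now; a quick check along these lines (using the same example disk names as above) should do:

cluster::*> storage disk show -disk 1a.20.* -fields owner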

Recable the shelf (or shelves) and reassign the disks to the new node:

cluster::*> disk assign -all true -node node3
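
To double-check that all disks really ended up on the new owner before continuing, something like this should work:

cluster::*> storage disk show -owner node3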

In the nodeshell of the new node (reachable via system node run -node node3), try to find the aggregate and bring it online again:

node3> aggr status
   aggr2  offline         raid_dp, aggr         raidsize=20, resyncsnaptime=60
                                foreign
                                64-bit

node3> aggr online aggr2
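
Leave the nodeshell again with exit (or Ctrl-D):

node3> exit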

Back in the cluster shell, we need to fix the VLDB so the aggregate is known to the cluster again:

cluster::*> debug vreport show -type aggregate 

aggregate Differences:

Name             Reason   Attributes
--------         -------  ---------------------------------------------------
aggr2(9c41df87-a14a-44e0-bf5f-6cab4765940b)
                 Present in WAFL Only
                          Node Name: node3
                          Aggregate UUID: 9c41df87-a14a-44e0-bf5f-6cab4765940b
                          Aggregate State: online
                          Aggregate Raid Status: raid_dp
                          Aggregate HA Policy: sfo
                          Is Aggregate Root: false
                          Is Composite Aggregate: false
cluster::*> debug vreport fix -type aggregate -object aggr2(9c41df87-a14a-44e0-bf5f-6cab4765940b)

Here it is again:

cluster::*> aggr show aggr2

Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr2      233.1TB   41.04TB   82% online       3 node3            raid_dp,
                                                                   normal

Now we need to bring all volumes on this aggregate back online:

cluster::*> debug vreport show -type volume 

volume Differences:

Name             Reason   Attributes
--------         -------  ---------------------------------------------------
vserver1:vol1
                 Present in WAFL Only
                          Node Name: node3
                          Volume DSID:2661 MSID:2150082831
                          UUID: 0448e3fe-2fed-11e7-a377-00a098a2063a
                          Aggregate Name: aggr2
                          Aggregate UUID: 9c41df87-a14a-44e0-bf5f-6cab4765940b
                          Vserver UUID: 9a80db65-2feb-11e7-8c65-00a098a2058a
                          AccessType: DP_READ_ONLY
                          StorageType: REGULAR
                          Constituent Role: none
(...)

3 entries were displayed.
cluster::*> debug vreport fix -type volume -object vserver1:vol1
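
The fix has to be repeated for every volume that vreport lists (three in this example). Should a volume still show as offline afterwards, it can be brought online from the cluster shell in the usual way, for example:

cluster::*> volume online -vserver vserver1 -volume vol1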

Voilà, all volumes are back again, too:

cluster::*> debug vreport show -type volume                                                                                     
There are no entries matching your query.

Info: WAFL and VLDB volume/aggregate records are consistent.
cluster::*> vol show -aggregate aggr2

Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
vserver1  vol1         aggr2        online     DP         70TB     7.14TB   89%

(...)

3 entries were displayed.
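
Since the volumes in this example are SnapMirror destinations (type DP, AccessType DP_READ_ONLY), a final sanity check is to confirm that the relationships are still healthy, for example:

cluster::*> snapmirror show -destination-path vserver1:vol1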

2 thoughts on “Moving an Aggregate within a cDOT Cluster from one HA Pair to Another”

  1. Zdravko Spoljar

    this is great info … thx

    but … how do you “fix” volumes if you move an aggregate between different clusters (so the old SVM name is not available)?

    I get: Error: command failed: Vreport cannot fix a discrepancy with a missing Vserver. To remove this discrepancy, run the command “volume lost-found delete
    -node -dsid ” to delete this volume from wafl.
    Is there no way to add this volume to the new SVM if the old name is unknown (or displayed as unknown)?
    So data migration is not possible?

    1. Phil

      I’ve not tried this, but the procedure above is moving an aggregate within the same cluster between HA pairs. In that scenario, the original SVM should certainly still be there.

      Moving an aggr from one cluster to an entirely different cluster is something else.

