Moving an Aggregate within a cDOT Cluster from one HA Pair to Another

2018-02-06

Use these commands at your own risk 🙂

Take the aggregate offline and remove the disk ownership:

cluster::*> aggr offline aggr2
cluster::*> disk removeowner 1a.20.*
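
Before removing ownership, it is worth double-checking which disks actually belong to the aggregate, so the wildcard hits exactly the right drives. A sanity-check sketch using standard clustershell commands (aggregate name as in the example above):

cluster::*> storage aggregate show-status -aggregate aggr2
cluster::*> storage disk show -aggregate aggr2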

Recable the shelf (or shelves) and reassign the disks to the new node:

cluster::*> disk assign -all true -node node3
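
After recabling and reassigning, a quick check that no disks were left unassigned and that node3 now owns them can't hurt (again standard clustershell commands, shown as a sketch):

cluster::*> storage disk show -container-type unassigned
cluster::*> storage disk show -owner node3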

In the nodeshell of the new node, check that the aggregate is visible (it will show up as foreign) and bring it online again:

node3> aggr status
   aggr2  offline         raid_dp, aggr         raidsize=20, resyncsnaptime=60
                                foreign
                                64-bit

node3> aggr online aggr2

Back in the clustershell, we need to fix the cluster database (VLDB) so the aggregate is known to the cluster again:

cluster::*> debug vreport show -type aggregate

aggregate Differences:

Name             Reason   Attributes
--------         -------  ---------------------------------------------------
aggr2(9c41df87-a14a-44e0-bf5f-6cab4765940b)
                 Present in WAFL Only
                          Node Name: node3
                          Aggregate UUID: 9c41df87-a14a-44e0-bf5f-6cab4765940b
                          Aggregate State: online
                          Aggregate Raid Status: raid_dp
                          Aggregate HA Policy: sfo
                          Is Aggregate Root: false
                          Is Composite Aggregate: false
cluster::*> debug vreport fix -type aggregate -object aggr2(9c41df87-a14a-44e0-bf5f-6cab4765940b)

Here it is again:

cluster::*> aggr show aggr2

Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr2      233.1TB   41.04TB   82% online       3 node3            raid_dp,
                                                                   normal

Now we need to bring all volumes on this aggregate back into the cluster, too:

cluster::*> debug vreport show -type volume

volume Differences:

Name             Reason   Attributes
--------         -------  ---------------------------------------------------
vserver1:vol1
                 Present in WAFL Only
                          Node Name: node3
                          Volume DSID: 2661  MSID: 2150082831
                          UUID: 0448e3fe-2fed-11e7-a377-00a098a2063a
                          Aggregate Name: aggr2
                          Aggregate UUID: 9c41df87-a14a-44e0-bf5f-6cab4765940b
                          Vserver UUID: 9a80db65-2feb-11e7-8c65-00a098a2058a
                          AccessType: DP_READ_ONLY
                          StorageType: REGULAR
                          Constituent Role: none
(...)

3 entries were displayed.
cluster::*> debug vreport fix -type volume -object vserver1:vol1
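
The fix command takes one object at a time, so with three entries reported it has to be repeated for each remaining volume; the angle-bracket placeholder below stands for the names from the Name column of the vreport output:

cluster::*> debug vreport fix -type volume -object <vserver>:<volume>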

Voilà, all volumes are back again, too:

cluster::*> debug vreport show -type volume                                                                                     
There are no entries matching your query.

Info: WAFL and VLDB volume/aggregate records are consistent.
cluster::*> vol show -aggregate aggr2

Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
vserver1  vol1         aggr2        online     DP         70TB     7.14TB   89%
(...)

3 entries were displayed.
