Channel: Data Protection topics

SnapCenter 1.1 NFS Volume Backup with VSC Fails on New-NcSnapshotMulti


We have a new installation of SnapCenter 1.1 connected to our VMware vCenter server (6.0U2) and all seems well until we try to schedule and run a backup. I can set up the task without any issue, but every time I attempt to run it, it fails with the following error:

 

" Create snapshot 'Test_CL-VSC-7VUS01_05-20-2016_15.13.27.5960' failed: Error: Snapshot operation failed. Failed to run a command New-NcSnapshotMulti" 

 

I've verified that the storage connections and users are correct, and everything else seems to work just fine. The volume I'm trying to back up is NFS-based. Any suggestions on how to fix this?
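
In case it helps narrow this down, one test I was planning is to create a snapshot directly with the NetApp PowerShell Toolkit, using the same SVM credentials that are registered in SnapCenter (the controller and volume names below are placeholders, and New-NcSnapshot is only a single-volume stand-in for the New-NcSnapshotMulti call named in the error):

Import-Module DataONTAP
# Connect as the same storage user that SnapCenter/VSC has registered (placeholder name)
Connect-NcController -Name svm-nfs-01 -Credential (Get-Credential)
# Confirm the NFS datastore volume is visible to that user (placeholder volume name)
Get-NcVol -Name vsc_datastore_vol
# Try a manual snapshot; if this fails, the storage user's API permissions are suspect
New-NcSnapshot -Volume vsc_datastore_vol -Snapshot manual_test_snap

If the manual snapshot succeeds, the problem is presumably on the SnapCenter/VSC side rather than with the storage credentials.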


Snap Creator with SAP HANA (SPS 10) Multitenant Database Container (MDC)


Dear Team

As we know, HANA MDC still does not support snapshot technology for consistent backups, so I am looking at HANA file-based backups as an alternative. However, the Snap Creator console also seems to support only single-container setups, not MDC.

 

Does anyone have any ideas to share on how to use the Snap Creator console to perform backups for an MDC HANA environment?

 

Best Regards,

Richy

Migration of 200 TB of data from IBM DS3512/3524


Dear all,

 

We are doing a data migration from IBM DS3512 and DS3524 systems with 4 expansions. Please suggest the best scenario to migrate the data to a NetApp E2700.

 

Thanks 

Sans.

SnapCenter 1.1, SnapVault Backup Removal, and Oracle Restore


I'm working on getting SnapCenter up and running to back up my Oracle 12c database and have a couple of issues/questions.

 

First, backups (with replication to the SnapVault secondary), verifies, mounting backups, and cloning backups all work well.

 

My primary filer is an AFF8040 and the SnapVault secondary is a FAS2554, both running cDOT 8.3.1. My Oracle 12c database is configured as a container database with two PDBs.

 

My first question is about clearing out old backups listed in the restore wizard from the secondary (SnapVault) unit. The protection policy on cDOT has removed a bunch of backups (as intended), but when I go to restore a backup (and select secondary), they still show up in the list (clicking on one doesn't show the restore options, as it no longer exists). Is there a way to clean up the list of secondary backups? I tried refreshing resources, but that didn't seem to do it.

 

My second question is more of an issue. When trying to perform a restore, it fails immediately. The error I get is:

 

"Failed to find Host 'oracle-mjh'Please make sure that managed host name is specfied or try with a fully qualified name".

 

I did specify the FQDN in the host configuration, and all systems are able to resolve it by the hostname alone. Like I said above, backups, verifies, mounts, and clones all work perfectly; I'm only having an issue doing a restore. NetApp support is trying to help me with the issue, but they haven't figured anything out yet, so I figured I'd see if anyone else has had the same problem and figured it out.

 

Thanks in advance!

 

-Terry

 

 

OSSV backup starts, then the server appears frozen for four minutes


Hi Community,

I have a big problem with OSSV and maybe someone could help me?

When the OSSV backup (SnapVault, every two hours) starts, the server appears frozen for four minutes. I cannot start any application. After four minutes everything seems OK. There are no errors in the Event Viewer or in the NetApp Management Console!

OS type: Windows Server 2008 R2 Standard Edition Service Pack 1
OSSV version: 3_0_1_2011FEB17_x64_RC
Storage: NetApp FAS2220, Release 8.1.4P8, 7-Mode

 

I have no idea where the problem is. How can I solve it? A NetApp case is open but not yet solved.

Has anyone seen this problem before?

Moving SnapProtect volumes to another aggregate


Hello Community,

 

We use SnapProtect to protect our Exchange and VMware environments. SnapProtect manages SnapVault Snapshots to a secondary cDOT system as Snapshot copies.

 

My question is: what do I have to pay attention to if I move volumes on the secondary system from one aggregate to another (same node) with the vol move command? Do I have to change settings in the SnapProtect storage policy because the destination aggregate changes, or will OCUM manage the change to the protection set automatically?
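
For context, the move itself would just be the standard cDOT command below (placeholder SVM, volume, and aggregate names); my question is only about the SnapProtect/OCUM side of it:

::> volume move start -vserver svm_backup -volume exch_db01_sv -destination-aggregate aggr_sata_02
::> volume move show -vserver svm_backup -volume exch_db01_sv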

 

Info: We have already added the new aggregate to the OCUM resource pool.

 

KFU

 

 

SMO Clone Issue After Server Move


Greetings,

 

We are running into a cloning issue and looking for advice/thoughts.

 

Using SMO, we are cloning to a secondary destination server. The process "was" working before the secondary server was rebuilt.

The same settings were applied to the new secondary server.

The SOURCE and DESTINATION servers are on different subnets for data access, but both can communicate with the MGMT IP (the primary storage system IP).

 

A couple of things that I am unclear on:

1. What exactly is the datapath IP used for, and do we really need it?

2. Where is SMO obtaining the "192.168.134.114" address from? The SnapDrive config? The hosts file?

 

We have tried multiple configuration changes to no avail.

SnapDrive communication with the storage systems has been verified on all hosts and works correctly.

 

Any help would be greatly appreciated.

 

 

ERROR] SMO-13032: Cannot perform operation: Clone Create.  Root cause: SMO-11007: Error cloning from Snapshot copy: FLOW-11019: Failure in ExecuteConnectionSteps: SD-00027: Error connecting filesystem(s) [/data] from snapshot smo_ebst_ebst3_f_h_1_8a829413552fe27401552fe2787b0001_0: SD-10016: Error executing snapdrive command "/usr/sbin/snapdrive snap connect -fs /data /data_AUTOCLONE -destfv 192.168.134.114:/vol/oranfs_ebsdbuatrac_data SnapManager_20160608093343286_oranfs_ebsdbuatrac_data -snapname 192.168.134.114:/vol/oranfs_ebsdbuatrac_data:smo_ebst_ebst3_f_h_1_8a829413552fe27401552fe2787b0001_0 -autorename -noreserve": 0001-136 Admin error: Unable to log on to storage system: 192.168.134.114

 

CONFIGS

-------------

 

SOURCE

# snapdrive config list
username     appliance name   appliance type
-----------------------------------------------
sdora-user   houfiler4a       StorageSystem
svc-sdora    houvntapoc1      DFM

 

# snapdrive config list -mgmtpath
system name   management interface   datapath interface
-------------------------------------------------------
houfiler4a    10.2.1.14              192.168.133.44

 

DESTINATION (Secondary)

 

username     appliance name   appliance type
-----------------------------------------------
sdora-user   houfiler4a       StorageSystem
svc-sdora    houvntapoc1      DFM

system name   management interface   datapath interface
-------------------------------------------------------
houfiler4a    10.2.1.14              192.168.134.114
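
Regarding question 2 above: I suspect the 192.168.134.114 address is simply the datapath entry shown in this table. If that mapping needs to change, I assume it would be done roughly like this (the argument order for config set -mgmtpath is my reading of the SnapDrive for UNIX docs, so please correct me if it's wrong):

# On the DESTINATION host: remap the datapath interface SnapDrive (and therefore SMO)
# uses for houfiler4a, then confirm the new mapping
snapdrive config set -mgmtpath 10.2.1.14 <reachable_datapath_IP>
snapdrive config list -mgmtpath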

 

Cheers!

 

Ken

 

SnapRestore license


I currently have a SnapRestore license for ONTAP 8.1.x, and I'm on 8.2 right now. I want to use SnapRestore with the VSC plug-in. I read that with 8.2 you need a 28- to 42-character license key, and mine is only 7 characters. Do I need to purchase the license again to be able to work with my current version?
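
For what it's worth, this is how I was planning to check and (re)add the key, assuming the clustered ONTAP 8.2 key format; the new key itself would presumably have to come from the NetApp support/license portal:

::> system license show -package SnapRestore
::> system license add -license-code <28-character key>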

 

Thanks in advance.


Solaris LDOMs running SMO


Does anyone have any idea whether we can run SMO on Solaris LDOMs? The problem seems to be that the hypervisor masks the LUNs to the logical hosts as virtual drives. From the Solaris side, is there a way to do a direct mapping of the LUNs? From the NetApp side, can I use NFS or Oracle dNFS to bypass this issue?

SnapMirror causing intermittent 1135 cluster errors in Exchange 2010


Hi,

 

We have NetApp OnCommand System Manager 8.3.1P2. The daily SnapMirror is intermittently causing Exchange mailbox servers to drop out of the DAG with Event ID 1135 system event log errors:

 

  • Cluster node 'NNNNNN' was removed from the active failover cluster membership. The Cluster service on this node may have stopped.

It occurs approximately 10 minutes after the backup starts. The mailbox server affected varies. It doesn't happen daily; sometimes there are 8 days between episodes.

The backup was set to start at 10 pm and the issues occurred at roughly 10:10 pm. To prove it was the cause, we moved it to 11 pm; the issue moved to 11:08 pm.

 

Exchange is 2010 SP3 with the latest rollups and hotfixes. The DAG is multi-site (3 subnets). Same-subnet cluster heartbeats are set to 2/10 and cross-subnet to 4/10, i.e. Microsoft best practice.
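
For reference, those values were read back on one of the DAG members with the FailoverClusters PowerShell module (this is just how we verified the settings, not a fix):

Import-Module FailoverClusters
# Current heartbeat delay/threshold settings for the cluster underneath the DAG
Get-Cluster | Format-List SameSubnetDelay, SameSubnetThreshold, CrossSubnetDelay, CrossSubnetThreshold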

 

Anyone experienced this?

Any recommendations?

Should we be using SnapManager for Exchange instead of OnCommand System Manager?

Is OnCommand System Manager fully compatible with Exchange 2010 SP3?

Are any specific settings required when backing up Exchange?

Are there any best practices for the configuration/settings of OnCommand System Manager?

 

BTW: I'm a Microsoft Windows/AD/Exchange/etc person, not NetApp, so be gentle with me here.

 

Thanks for your help.

Martin

 

Exchange 2013 SP1 CU10 - Logs not Truncating


We are running SnapManager for Exchange 7.1.1, a new install with a few test mailboxes and 2 nodes in a DAG. Everything completed fine, but the logs are not truncating. There are no errors in the reports.

 

We have a similar environment running Exchange 2010 with a 2-node DAG; everything works perfectly there and the logs truncate. We compared both environments and they are identical.

 

I have been advised to open a ticket with Microsoft.

 

Any thoughts?

 

Thank you

Snap Creator: two jobs at the same time fail


Hi,

 

We run Snap Creator for VMware backups. When Snap Creator starts two or more jobs at the same time (every day at 0:00), some jobs fail with the error:

 

ERROR: SCF-00066: Agent validation failed for [10.79.3.200:9050] with error [Connection refused: connect].

 

The other jobs run without problems. All jobs use the same agent.

Is it possible that the agent can handle only one job at a time?

 

Tobi

Unable to delete backups using SMSQL


I'm trying to delete some old backups with SMSQL (more than 8 days old), but it's not working.

I saw this error in the log:

[09:07:25.532] [SVA0045] Querying backup data sets for database: BQ_MHUB_I...
[09:07:25.845] [SVA0045] Preparing LUN 'J:\Devices\ag-temp\', for SDAPI operation...
[09:07:44.204] [SVA0045] Snapshot enumeration failed.
[09:07:44.204] [SVA0045] [SDAPI Communication Exception]: The LUN may not be connected, because its mount point cannot be found.

[09:07:44.204] [SVA0045] SDAPI failed to enumerate snapshot.
[09:07:44.204] [SVA0045] [SDAPI Communication Exception]: The LUN may not be connected, because its mount point cannot be found.


[09:07:44.204] [SVA0045] Error Code: 0x80004005
Unspecified error

[09:07:44.204] [SVA0045] WARNING: Failed to retrieve snapshot information from SnapDrive.
[09:07:44.204] [SVA0045] Aborting backup deletion...
[09:07:44.204] [SVA0045] WARNING: Disk can run out of space if SnapDrive issue is not fixed as the delete operation cannot proceed.

[09:07:44.204] [SVA0045] Error Code: 0xc0040971
Failed to retrieve snapshot from SnapDrive. Backup deletion skipped.


[09:07:44.204] [SVA0045] Error Code: 0xc0040971
Failed to retrieve snapshot from SnapDrive. Backup deletion skipped.

 

The mount point J:\Devices\ag-temp\ has been deleted, but it looks like SMSQL is still looking for it. Is there a way to clear that mount point from SMSQL?
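
In case it matters, this is what I was going to run next to see whether SnapDrive itself still has the deleted mount point registered (sdcli is the SnapDrive for Windows command line; treat this as a sketch):

REM List the disks/mount points SnapDrive currently knows about
sdcli disk list
REM If J:\Devices\ag-temp\ still appears here, the stale entry is on the SnapDrive side rather than in SMSQL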

 

Thanks

Philip

Virtual Storage Console 6.2 P2 - NetApp Storage Discovery Task Stays Queued


Hello,

 

I upgraded to Virtual Storage Console 6.2P2 the other day, and now I notice that every day I have a task in VMware that stays queued up. There is one from yesterday and one from today; both stayed queued and I cannot cancel them.

 

I am running vCenter 5.5 Update 3d and our hosts are all ESXi 5.5 U3. I have a NetApp FAS3240 running 8.1.4 in 7-Mode.

 

The few backup jobs I have are running just fine; it's just the queued-up task that is the problem.

 

I have attached a screenshot of what I am seeing.

 

Is this a known issue and will there be a fix?  

 

Thanks for your time!

 

P.S. I made sure I followed the steps in "Removing vSphere Web Client UI extensions from the vCenter Server" that are detailed on page 10 of the "VSC 6.2 for VMware vSphere Release Notes" PDF.

 

P.P.S. I had to select a label for this post and I did NOT see one for Virtual Storage Console, so I just picked one so that I could move on with creating this post. Please ignore it, as it isn't relevant to this matter.

Snap Creator restore of Xenserver VM without autoboot


Is there a way to avoid the autoboot of a restored VM when using Snap Creator and the XenServer plug-in?

 

 


SnapProtect restore failure


Hope someone can help.

Trying to restore a backup to an empty volume on a filer.

SnapProtect 10 and ONTAP 8.2.2 in 7-Mode.

I've attached the CVNasSnapRestore log.

 

SMO-13032: Cannot perform operation: Backup Restore. Root cause: SMO-11005


Our NFS volumes are on a .99.xx subnet and are therefore exported only to .99 addresses. SMO (or SnapDrive) tries to use the default subnet (in this case .117.xx) when a restore operation is attempted. We do not want to add the .117 subnet to the export list on the filer just to do a database restore via SMO. Is there an option that will override this undesired behavior? The full SMO log is below.

 

--[ INFO] SMO-13036: Starting operation Backup Restore on host testdb2.lnx.aci.corp.net
--[ INFO] SMO-13046: Operation GUID 8ae4f5e955a712f50155a712f8f40001 starting on Profile TSTFOOT3
--[ INFO] SMO-22001: Started adding the Backup Restore operation in history.
--[ INFO] SMO-07431: Saving starting state of the database: tstfoot3(OPEN).
--[ INFO] SMO-07431: Saving starting state of the database: tstfoot3(OPEN).
--[ INFO] SMO-07127: Locked database for SnapManager operations - created lock file "/orabase/orahomes/12.1.0.2.160119/dbs/.sm_lock_tstfoot3" on host testdb2.lnx.aci.corp.net.
--[ INFO] ORACLE-20000: Changing state for database instance tstfoot3 from OPEN to STARTED.
--[ INFO] SMO-07200: Beginning restore of database "TSTFOOT3".
--[ INFO] SD-00022: Querying for snapshot stlpsan02:/vol/testdb2_tstfoot3_oraarc_nfs:tstfoot3_20160630_194042_2_8ae4f5e955a3e5f50155a3e5faa90001_0.
--[ INFO] SD-00023: Finished querying for snapshot stlpsan02:/vol/testdb2_tstfoot3_oraarc_nfs:tstfoot3_20160630_194042_2_8ae4f5e955a3e5f50155a3e5faa90001_0.
--[ INFO] SD-00022: Querying for snapshot stlpsan02:/vol/testdb2_tstfoot3_oractl_nfs:tstfoot3_20160630_194042_2_8ae4f5e955a3e5f50155a3e5faa90001_0.
--[ INFO] SD-00023: Finished querying for snapshot stlpsan02:/vol/testdb2_tstfoot3_oractl_nfs:tstfoot3_20160630_194042_2_8ae4f5e955a3e5f50155a3e5faa90001_0.
--[ INFO] SD-00016: Discovering storage resources for /mnt/tstfoot3/oractl.
--[ INFO] SD-00017: Finished storage discovery for /mnt/tstfoot3/oractl.
--[ INFO] SD-00004: Beginning restore of filesystem(s) [/mnt/tstfoot3/oractl] from snapshot tstfoot3_20160630_194042_2_8ae4f5e955a3e5f50155a3e5faa90001_0.
--[ INFO] SD-00005: Finished restore of filesystem(s) [/mnt/tstfoot3/oractl] from snapshot tstfoot3_20160630_194042_2_8ae4f5e955a3e5f50155a3e5faa90001_0.
--[ INFO] PLAT-00001: Copying file "/mnt/tstfoot3/oractl/SMOBakCtl_1467333637285_1" to "/mnt/tstfoot3/oractl/control01.ctl".
--[ INFO] PLAT-00001: Copying file "/mnt/tstfoot3/oractl/SMOBakCtl_1467333637285_1" to "/mnt/tstfoot3/oractl/control02.ctl".
--[ INFO] ORACLE-20000: Changing state for database instance tstfoot3 from STARTED to MOUNTED.
--[ INFO] ORACLE-20009: Attempting to reconnect to instance tstfoot3 after shutdown/startup.
--[ INFO] ORACLE-20011: Reconnect to instance tstfoot3 successful.
--[ INFO] SD-00022: Querying for snapshot stlpsan02:/vol/testdb2_tstfoot3_oradata_nfs:tstfoot3_20160630_194021_1_8ae4f5e955a3e5f50155a3e5faa90001_0.
--[ INFO] SD-00023: Finished querying for snapshot stlpsan02:/vol/testdb2_tstfoot3_oradata_nfs:tstfoot3_20160630_194021_1_8ae4f5e955a3e5f50155a3e5faa90001_0.
--[ INFO] SD-00016: Discovering storage resources for /mnt/tstfoot3/oratemp.
--[ INFO] SD-00017: Finished storage discovery for /mnt/tstfoot3/oratemp.
--[ INFO] SD-00016: Discovering storage resources for /mnt/tstfoot3/oraarc.
--[ INFO] SD-00017: Finished storage discovery for /mnt/tstfoot3/oraarc.
--[ INFO] SD-00016: Discovering storage resources for /mnt/tstfoot3/oractl.
--[ INFO] SD-00017: Finished storage discovery for /mnt/tstfoot3/oractl.
--[ INFO] SD-00016: Discovering storage resources for /mnt/tstfoot3/oraredo.
--[ INFO] SD-00017: Finished storage discovery for /mnt/tstfoot3/oraredo.
--[ INFO] SD-00052: Beginning preview of volume restore of [stlpsan02:/vol/testdb2_tstfoot3_oradata_nfs] (with host-side resources [/mnt/tstfoot3/oradata]) from snapshots [[tstfoot3_20160630_194021_1_8ae4f5e955a3e5f50155a3e5faa90001_0]]
--[ INFO] SD-10030: Waiting for SnapDrive job (running)
--[ INFO] SD-10030: Waiting for SnapDrive job (running)
--[ INFO] SD-10030: Waiting for SnapDrive job (running)
--[ INFO] SD-10030: Waiting for SnapDrive job (running)
--[ INFO] SD-10030: Waiting for SnapDrive job (running)
--[ INFO] SD-10030: Waiting for SnapDrive job (completed)
--[ERROR] FLOW-11019: Failure in CalculateRestoreScope: SD-10028: SnapDrive Error (id:1859 code:100) The host's testdb2.lnx.aci.corp.net interfaces, bond0.117 are not allowed to access the path /vol/testdb2_tstfoot3_oradata_nfs on the storage system stlpsan02-nfs. To resolve this problem, please configure the export permission for path /vol/testdb2_tstfoot3_oradata_nfs on the storage system stlpsan02-nfs so that host testdb2.lnx.aci.corp.net can access the path.
--[ERROR] FLOW-11008: Operation failed: SD-10028: SnapDrive Error (id:1859 code:100) The host's testdb2.lnx.aci.corp.net interfaces, bond0.117 are not allowed to access the path /vol/testdb2_tstfoot3_oradata_nfs on the storage system stlpsan02-nfs. To resolve this problem, please configure the export permission for path /vol/testdb2_tstfoot3_oradata_nfs on the storage system stlpsan02-nfs so that host testdb2.lnx.aci.corp.net can access the path.
--[ERROR] SMO-11005: Error restoring Snapshot copy: FLOW-11019: Failure in CalculateRestoreScope: SD-10028: SnapDrive Error (id:1859 code:100) The host's testdb2.lnx.aci.corp.net interfaces, bond0.117 are not allowed to access the path /vol/testdb2_tstfoot3_oradata_nfs on the storage system stlpsan02-nfs. To resolve this problem, please configure the export permission for path /vol/testdb2_tstfoot3_oradata_nfs on the storage system stlpsan02-nfs so that host testdb2.lnx.aci.corp.net can access the path.
--[ERROR] SMO-13032: Cannot perform operation: Backup Restore. Root cause: SMO-11005: Error restoring Snapshot copy: FLOW-11019: Failure in CalculateRestoreScope: SD-10028: SnapDrive Error (id:1859 code:100) The host's testdb2.lnx.aci.corp.net interfaces, bond0.117 are not allowed to access the path /vol/testdb2_tstfoot3_oradata_nfs on the storage system stlpsan02-nfs. To resolve this problem, please configure the export permission for path /vol/testdb2_tstfoot3_oradata_nfs on the storage system stlpsan02-nfs so that host testdb2.lnx.aci.corp.net can access the path.
--[ INFO] SMO-07131: Unlocked database for SnapManager operations - removed lock file "/orabase/orahomes/12.1.0.2.160119/dbs/.sm_lock_tstfoot3" on host testdb2.lnx.aci.corp.net.
--[ INFO] SMO-07433: Returning the database to its initial state: tstfoot3(OPEN).
--[ INFO] ORACLE-20000: Changing state for database instance tstfoot3 from MOUNTED to OPEN.
--[ INFO] ORACLE-20032: Opening database tstfoot3 with READ WRITE NORESETLOGS option.
--[ WARN] SMO-07434: Could not return database to its original state. Error: ORACLE-20001: Error trying to change state to OPEN for database instance tstfoot3: ORACLE-10003: Error executing SQL "ALTER DATABASE OPEN READ WRITE NORESETLOGS" against Oracle database tstfoot3: ORA-01610: recovery using the BACKUP CONTROLFILE option must be done

--[ INFO] SMO-13039: Successfully aborted operation: Backup Restore
--[ERROR] SMO-13048: Backup Restore Operation Status: FAILED
--[ INFO] SMO-22002: Successfully recorded the Backup Restore operation in history.
--[ INFO] SMO-14557: Sending E-Mail notification...
--[ INFO] SMO-14558: E-Mail notification sent successfully.
--[ INFO] SMO-13049: Elapsed Time: 0:04:34.727
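
One thing we are checking on the SnapDrive side, rather than touching the exports, is which interfaces are actually in play (placeholder address below; this is a guess, not a confirmed workaround):

# On the host: confirm which interface carries traffic to the .99 data address
/sbin/ip route get <stlpsan02-nfs_.99_address>
# On the controller (assuming 7-Mode here): show the effective export rule for the volume
stlpsan02> exportfs -q /vol/testdb2_tstfoot3_oradata_nfs
# SnapDrive's own view of the management/data paths
snapdrive config list -mgmtpath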

Deleting Datasets and corresponding Snapshots


Hello Community,

 

As a result of changing some storage policies in SnapProtect, we have multiple protection sets and multiple SP volumes on the SnapVault destination for Snapshot copies of the same MS Exchange database. We waited until all Snapshots had aged, but now we have the problem that the relationship is still active, although the data has aged (one Snapshot with an active relationship was not deleted by the data aging agent).

 

How can we remove the old relationship and the corresponding volume (including its snapshots) on the SnapVault secondary without damaging anything?

 

We first tried to delete the aged job in SnapProtect, but there is no way to delete an aged job, because we can only find the aged job at the CommCell level and not at the storage policy level.

 

We use SnapProtect 10 R2 with OCUM 6.2 and cDOT 8.3.
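
At the ONTAP level we assume the teardown would look roughly like the commands below (cDOT 8.3 syntax, placeholder SVM/volume names); what we are unsure about is how to do this without confusing SnapProtect and OCUM, hence the question:

# On the SnapVault destination cluster: quiesce and delete the stale relationship
destination::> snapmirror quiesce -destination-path svm_dst:exch_db_old_sv
destination::> snapmirror delete -destination-path svm_dst:exch_db_old_sv
# On the source cluster: release the relationship metadata
source::> snapmirror release -destination-path svm_dst:exch_db_old_sv
# Then take the orphaned secondary volume offline and delete it (its snapshots go with it)
destination::> volume offline -vserver svm_dst -volume exch_db_old_sv
destination::> volume delete -vserver svm_dst -volume exch_db_old_sv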

 

Regards

 

KFU

Can SnapDrive for UNIX create a file clone?


SnapDrive for UNIX 5.3 and cDOT 8.3.1P1.

 

We are currently using SnapDrive to create LUN clones.

Is it possible to use SnapDrive to create file clones?
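
If SnapDrive itself cannot do it, the fallback I am considering is FlexClone file cloning directly on the cluster (cDOT 8.3 syntax, placeholder names; requires the FlexClone license):

::> volume file clone create -vserver svm_db -volume vol_data -source-path /file01.dbf -destination-path /file01_clone.dbf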

 

Thanks.

Snap Creator: mirror-SnapVault fan-out


Is there a way to set up mirror-SnapVault fan-out with Snap Creator?
I have primary volumes for production HANA nodes hosted on an SVM in one cluster. All my QA/dev nodes need to be on an SVM in another cluster (and therefore, for refresh purposes, mirrored from the primary), while I also vault from primary to secondary for tape-out.
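
At the ONTAP level, the fan-out itself would just be two relationships from the same source volume: a DP mirror to the QA/dev cluster and an XDP vault to the secondary (cDOT 8.3 syntax, placeholder SVM/volume names). My question is really whether Snap Creator can drive both legs from one configuration:

# Mirror leg (for QA/dev refresh), created on the QA/dev cluster
qa_cluster::> snapmirror create -source-path svm_prod:hana_data -destination-path svm_qa:hana_data_mir -type DP -schedule hourly
# Vault leg (for tape-out from the secondary), created on the vault cluster
vault_cluster::> snapmirror create -source-path svm_prod:hana_data -destination-path svm_vault:hana_data_sv -type XDP -policy XDPDefault -schedule daily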

 

Regards,

-Vlad
