Channel: Data Protection topics

SDU issues on RHEL 7.2


I am running RHEL 7.2 and have installed Unified Host Utilities 7.1 and SDU 5.3.1, as well as sg3_utils and sg3_utils_libs. We are using iSCSI to present a LUN from an 8.1.4 7-mode filer. The LUN is presented and multipath has been configured.

 

[root@rheltest ~]# sanlun lun show
controller(7mode/E-Series)/                                  device          host                  lun
vserver(cDOT/FlashRay)        lun-pathname                   filename        adapter    protocol   size    product
---------------------------------------------------------------------------------------------------------------
bradtest-01                   /vol/testvol/lun               /dev/sde        host5      iSCSI      5g      7DOT
bradtest-01                   /vol/testvol/lun               /dev/sdd        host6      iSCSI      5g      7DOT
bradtest-01                   /vol/testvol/lun               /dev/sdb        host3      iSCSI      5g      7DOT
bradtest-01                   /vol/testvol/lun               /dev/sdc        host4      iSCSI      5g      7DOT
[root@rheltest ~]# multipath -ll
360a98000427045777a244a2d555a4535 dm-0 NETAPP ,LUN
size=5.0G features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=2 status=active
  |- 3:0:0:0 sdb 8:16 active ready running
  |- 4:0:0:0 sdc 8:32 active ready running
  |- 5:0:0:0 sde 8:64 active ready running
  `- 6:0:0:0 sdd 8:48 active ready running
[root@rheltest ~]# cat /etc/multipath.conf
# All data under blacklist must be specific to your system.
blacklist {
    wwid 36000c294abf87d45ac7e123f864a9c5b
}

[root@rheltest ~]# rpm -qa | egrep "netapp|sg3|iscsi|multipath"
iscsi-initiator-utils-iscsiuio-6.2.0.873-35.el7.x86_64
netapp_linux_unified_host_utilities-7-1.x86_64
sg3_utils-libs-1.37-9.el7.x86_64
iscsi-initiator-utils-6.2.0.873-35.el7.x86_64
device-mapper-multipath-libs-0.4.9-99.el7.x86_64
device-mapper-multipath-0.4.9-99.el7.x86_64
netapp.snapdrive-5.3.1-1.x86_64
sg3_utils-1.37-9.el7.x86_64

So far so good. SnapDrive has been configured as follows:

 

[root@rheltest ~]# grep "^[^#;]" /opt/NetApp/snapdrive/snapdrive.conf
default-transport="iscsi" #Transport type to use for storage provisioning, when a decision is needed
fstype="ext3" #File system to use when more than one file system is available
multipathing-type="NativeMPIO" #Multipathing software to use when more than one multipathing solution is available. Possible values are 'NativeMPIO' or 'DMP' or 'none'
rbac-method="native" #Role Based Access Control(RBAC) methods
use-https-to-filer="on" #Communication with filer done via HTTPS instead of HTTP
vmtype="lvm" #Volume manager to use when more than one volume manager is available

[root@rheltest ~]# snapdrive config list
username    appliance name   appliance type
----------------------------------------------
root        bradtest-01      StorageSystem

[root@rheltest ~]# snapdrive storage show -all

 WARNING: This operation can take several minutes
          based on the configuration.
0001-185 Command error: storage show failed: no NETAPP devices to show or add the host to the trusted hosts (options trusted.hosts) and enable SSL on the storage system or retry after changing snapdrive.conf to use http for storage system communication and restarting snapdrive daemon.
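
The error text suggests two possible fixes. On the 7-Mode side they would look roughly like this (a sketch only; verify the option names against 8.1.4):

bradtest-01> options trusted.hosts rheltest        # add the host to the trusted hosts
bradtest-01> secureadmin setup ssl                 # generate a certificate if SSL was never set up
bradtest-01> options httpd.admin.ssl.enable on     # allow HTTPS admin access

Alternatively, switching SDU to HTTP on the host side:

[root@rheltest ~]# sed -i 's/^use-https-to-filer="on"/use-https-to-filer="off"/' /opt/NetApp/snapdrive/snapdrive.conf
[root@rheltest ~]# service snapdrived restart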

I see the following in the filer log when I run "snapdrive storage show -all" from the host:

 

 

Fri Dec  2 10:44:25 MST [bradtest-01:app.log.info:info]: rheltest.local: snapdrive 5.3.1 for UNIX: (3) general: Connected Luns=0, DGs=0, HVs=0, FS=0, OS_Name=Linux, Platform=Red Hat Enterprise Linux Server 7.2 (Maipo), Kernel_Version=RHCK 3.10.0-327.el7.x86_64, Protocol=iscsi, File_System=ext3, Multipath_Type=none, Host_VolumeManager=lvm, Host_Cluster=no, Host_Virtualization=yes, Virtualization_Flavor=VMware, RBAC_Method=native, Protection_Usage=none

 

When starting SDU, I get the all too common storage stack error.

 

[root@rheltest ~]# service snapdrived restart
Stopping snapdrive daemon: Successfully stopped daemon

Starting snapdrive daemon: WARNING!!! Unable to find a SAN storage stack. Please verify that the appropriate transport protocol, volume manager, file system and multipathing type are installed and configured in the system. If NFS is being used, this warning message can be ignored.
Successfully started daemon

And get the following when running sdconfcheck:

 

[root@rheltest ~]# sdconfcheck import -file /tmp/confcheck_data.tar.gz

The data files have been successfully imported from the specified source.
[root@rheltest ~]# sdconfcheck check

NOTE: SnapDrive Configuration Checker is using the data file version  v12052013
  Please make sure that you are using the latest version.
  Refer to the SnapDrive for Unix Installation and Administration Guide for more details.

Detected Intel/AMD x64 Architecture
Detected Linux OS
Detected Software iSCSI on Linux
Detected   Ext3 File System

Did not find any supported Volume managers.
Detected   Linux Native MPIO

Did not find any supported cluster solutions.

Did not find any supported HU tool kits.
sdconfcheck: ../../../messageinfo/src/messages.cpp:145: void messageinfo::messageParse::parseRecord(std::string&, std::string&): Assertion `false' failed.
Aborted
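
One thing worth ruling out first: snapdrive.conf sets vmtype="lvm", but sdconfcheck finds no supported volume manager, and a minimal RHEL install may simply not include lvm2. A quick check, as a sketch:

[root@rheltest ~]# rpm -q lvm2 || yum install -y lvm2
[root@rheltest ~]# sdconfcheck check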

Any thoughts? I've been beating my head against the wall on this for hours. Everything aligns with the IMT, and this is a brand new RHEL minimal install so it seems this SDU install should be textbook.


snapmirror SVM DR failed


Hi,

 

We have ONTAP 8.3.2.

I created a vserver to test SVM DR in our cluster, and that worked. But when I try to snapmirror our production vserver to an SVM DR vserver, it fails to initialize the SVM DR SnapMirror relationship.

 

The production vserver has a 1.9 TB volume.

The error message was:

Last Transfer Error: Failed to apply the source Vserver configuration. Reason: Apply failed for Object: hosts_byname Method: baseline. Reason: The IPV4 address specified with "-address" is not supported because it is one of the following:
multicast, loopback or 0.0.0.0.
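
The "hosts_byname" object in that error is the vserver's local hosts table, so one thing to check is whether the source vserver has a hosts entry with a 0.0.0.0, loopback, or multicast address, and remove it before re-initializing. A sketch ("prod_svm" and "dr_svm" are placeholder vserver names; verify the exact parameters on your release):

cluster::> vserver services name-service dns hosts show -vserver prod_svm
cluster::> vserver services name-service dns hosts delete -vserver prod_svm -address 0.0.0.0
cluster::> snapmirror initialize -destination-path dr_svm: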

 

The SVM DR vserver contains only the volumes and NIS services from the source vserver, but no DNS service and no CIFS protocol.

 

Any ideas?

 

Thanks,

 

Chi


Secure NDMP for NDMP data connections?


I see the following option in cDOT 8.3:

   vserver services ndmp modify -is-secure-control-connection-enabled {true|false} 

 

which is documented as enabling NDMP connections using SSL from port 30000.  
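
For reference, enabling and verifying the option looks like this (a sketch; "vs1" is a placeholder vserver name):

   vserver services ndmp modify -vserver vs1 -is-secure-control-connection-enabled true
   vserver services ndmp show -vserver vs1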

 

My question is: Is there any mechanism to do the same for an NDMP data connection in a 3-way backup/restore?

 

Thanks,

Mike

SnapManager for Hyper-V not seeing all guests within dataset


Hi all

 

So I have a strange issue: we have five 2008 R2 Hyper-V hosts with various VM guests on them.

The trouble is that the SnapManager for Hyper-V dataset is not seeing all our guest VMs.

It's set to look at the cluster DNS name.

On one host it sees everything, while on another host it sees one guest but none of the others.

It's very random; I have compared the VMs and there is no obvious difference between them.

 

We have four CSVs, and a cluster validation report comes back clean.

I am able to migrate storage and move VMs without issue, so I don't think the cluster or Hyper-V environment is to blame.

 

Unfortunately I inherited this when I arrived in the role; my theory is that my predecessor knew about it but let it be.

Obviously this means we are short on backups.

 

Any ideas or pointers would be great

Thanks

SMSQL post-command


Hello,

 

I am running a post-command after a backup, when the backup is successful.

 

But when the backup fails, for example if I try to take a transaction log backup of a database in the simple recovery model, the post-command script is not executed. Is there a way to have the post-command script run every time, no matter what the backup result is?

 

 

Disconnect Snapdrive clone without containing vol getting deleted


In SnapDrive for Windows versions prior to 7.1, there was a "feature" whereby, if you renamed the FlexClone volume created as part of a LUN clone, SnapDrive would not delete the containing FlexClone volume when disconnecting from the LUN.

 

Since version 7.1 this has stopped working: SD now always deletes the containing FlexClone volume when disconnecting from the LUN, regardless of its name.

 

Is there an easy way around this in the later versions of SnapDrive?

We have a use case here where a snapshot is taken of a primary system, then a clone is mounted on a secondary to run various tests/processes. Once that completes, it is not uncommon to request that the clone be mounted back on the primary system (quicker than copying the data); once on the primary, a clone split is then run to tidy up.

 

Running the clone split before disconnecting the clone from the secondary is out of the question, as this process usually takes too long, and no checkpoint snapshots can be taken of the flexvol while the split is occurring.

 

I know I can unmap the LUN on the storage system and re-map it manually (roughly as in the sketch below), but I would like to avoid extra manual intervention if possible.
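
For the record, the manual workaround looks roughly like this on 7-Mode (a sketch only; volume, LUN, and igroup names are placeholders):

filer> lun unmap /vol/clone_vol/lun0 secondary_igroup
filer> lun map /vol/clone_vol/lun0 primary_igroup
filer> vol clone split start clone_vol
filer> vol clone split status clone_vol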

 

 

Any Ideas/suggestions welcome.

 

Thanks

Licence needed?


"Have multiple Windows servers using ISCSI  to my main filer, FAS3240. I would like to map a drive to another filer, a fas 2040, in a different location, so i can migrate data between the two.  Problem is it tells me i do not have a licence to so.

 

I receive this error from the Create Disk wizard: "the requested operation is not permitted since this functionality is not licensed".

 

Trying to establish a session gives me "the LUN provisioning and snapshot management SnapDrive module license required for the requested operation is not present".

 

Why would I need a licence if I can already map to one filer?

 

ONTAP 7-Mode, 8.1.4P9, with SnapDrive 6.3.1.4912.
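
Since SnapDrive checks the licenses installed on each filer it talks to, a quick first step is to compare the license lists on the two systems; the FAS2040 may simply lack the SnapDrive/SnapManager licensing that the FAS3240 has. A sketch:

fas2040> license                # list installed licenses; compare against the working FAS3240
fas2040> license add <code>     # install the missing license code, if you have one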

SnapCenter 1.1, VSC 6.2 - workaround for bug 996792 "Failed to run a command New-NcSnapshotMulti"


If you're using SnapCenter 1.1 and the NetApp Virtual Storage Console for your VMware infrastructure, and you're having problems with backups, let my sordid tale help you in your hour of need. Should you be getting errors from VSC that it cannot perform a snapshot on your local SVM, with "failed to run a command New-NcSnapshotMulti", or if your snapmirrors will not update, reporting the error "Activity 'Replicating to Secondary' failed with error: Unable to find SnapMirror destination(s) for source", look below.

 

Because in case you haven't noticed, there's not much documentation for SnapCenter and VSC apart from "install and then register X with Y". The Communities, useful tool that they are, won't have everything you need to fix your backups. The NetApp bug tracker has an entry for this problem - 996792 - but the information for it is internal-only, so you won't see it if you search the public bug list (as of this writing, anyway).

 

Below are the steps I took to fix my SnapCenter 1.1 and VSC 6.2 (and 6.2.1) problems, and to get backups running again after they had not run (automatically) in many months.

 

This assumes that you have VSC 6.2 or VSC 6.2.1 and SnapCenter 1.1 installed, and that VSC and SnapCenter aren't run on the same server, because otherwise they slow to a crawl.

Also keep in mind that SnapCenter 1.1 *needs* to know your cluster IPs *and* your SVMs.

Also keep in mind that Virtual Storage Console must *NOT* know about your cluster IPs - SVMs only!

 

-

 

1. Make sure that any unused datastores on your VMware infrastructure are removed *if* they connect to an SVM that you do not want to appear in the VSC.   SVMs that show up as 'unknown' or are unmanageable break SC+VSC.

2. Remove "management" from any LIF on your SVMs that you do not want VSC to "see"; VSC will look at your various datastores and add management LIFs accordingly.  LIFs that you have on an SVM that are connected to your ESX hosts but *not* reachable by your VSC server should not appear - and must be removed.  
    - For instance:  SVM "vmware_svm" has management access enabled via data-only network 10.10.10.55, but your VSC server is at 172.17.40.12, with no route inbetween.
    - The non-reachable SVM *should* be removable unless there is a mounted datastore from that SVM. See #1.

    - In some cases you can use an alternate IP address for management of the SVM.  (See 8a)

3. On your VSC server, navigate to c:\Program Files\Netapp\Virtual Storage Console\etc\vsc

4. Move the file "vsc.xml" somewhere else.  This has the SVM/cluster information for VSC.

5. Restart the NetApp SnapManager for Virtual Infrastructure and Virtual Storage Console services.

6. Re-log into the web client.

7. On the VSC tab, under "storage systems", your previous storage systems will be gone; the entries will be listed as 'unknown', showing the IP (and not the name).

8. Identify the IP addresses listed in "storage systems", and match them to their SVMs:
    a) If an IP address belongs to an SVM that is *not* required for VSC, then delete it. If VSC needs it, you won't be able to. It's fine.

    b) If an IP address is listed for an SVM via an unreachable IP (see #2), modify that entry, and replace the unreachable IP with an IP address on that SVM that *is* reachable by your VSC server.

9. Add any SVMs that need to be configured in VSC but were not already there by default.

10. Once all SVMs have been added, wait until they show up with an alert for insufficient privileges. You can't avoid this, even if the user has enough privs; even if you modify the connection, make sure the password is good, save it, and it says "OK", it will revert to insufficient privileges anyway.

11. At this point, unless there are issues with Snapcenter, things should "work", despite the errors that VSC has for each of the SVMs.

12. If SnapCenter is the point of failure for your backup jobs, it *might* be that the SnapCenter server has simply lost its brain; if you attempt to update the SVM connection via the "settings" menu and it claims it cannot find or connect to the SVM, simply delete it and re-add it. (Yes, that's right, delete and re-add. It's not right, but there you have it.)

13. If you have a backup job that persists in failing with New-NcSnapshotMulti, you can also try editing the backup job, "removing" the backup entities (datastores) from the list, and re-selecting them. This worked for one persistently failing backup job: when I re-chose the datastores to back up, saved, and ran it, it worked flawlessly.
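
Referring back to step 2: on clustered ONTAP, taking "management" off a data LIF generally means switching it to a data-only firewall policy. A sketch of the idea (LIF and SVM names are placeholders; check your existing policies before changing anything):

cluster::> network interface show -vserver vmware_svm -fields firewall-policy
cluster::> network interface modify -vserver vmware_svm -lif data_lif1 -firewall-policy data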

 

 

-

 

Hopefully this will assist you. If you have any questions, bug me via email, and I'll see what I can do. I know far more about SC and VSC than I ever cared to know.

aggr snap restore hung


Greetings, 

    I am waiting for support to help me with this issue, but I thought I would try the community forum while I wait. I have the following problem:

 

I ran:

 

aggr copy aggr1 backup

 

Once this finished, I brought the aggr online to verify the data.

 

Then I ran:

 

snap restore -A nightly.0 backup

 

This resulted in the following:

 

WARNING! This will revert the aggregate to a previous snapshot.
All modifications to the aggregate after the snapshot will be
irrevocably lost.

Aggregate backup will be made restricted briefly before coming back online.

Are you sure you want to do this? y

 

It took a few minutes for this command to return. I now have an aggr that is stuck offline with the following status:

 

aggr status -v backup
           Aggr State           Status            Options
         backup unmounting      raid_dp, aggr     nosnap=off, raidtype=raid_dp,
                                                  raidsize=12,
                                                  ignore_inconsistent=off,
                                                  snapmirrored=off,
                                                  resyncsnaptime=60,
                                                  fs_size_fixed=off,
                                                  snapshot_autodelete=on,
                                                  lost_write_protect=on
Volumes: <none>

                Plex /backup/plex0: online, normal, active
                    RAID group /backup/plex0/rg0: normal
                    RAID group /backup/plex0/rg1: normal

It's been stuck in this state for the last 20 hours.

 

Hoping someone has some suggestions or ideas.

 

Thanks,

     -Steve

 

SnapManager for SQL Point in Time Restores


I'm wondering if anyone can shed some light on this for me, as I can't seem to get a straight answer from the vendor who configured our backup system. What needs to be done to configure SnapManager for SQL so that we can continue to do point-in-time restores once the data has been snapvaulted to secondary storage? Is this possible?

 

Also, could someone confirm what we need to keep in order to restore from a SnapVault? Currently the system keeps a weekly SnapVault of the database, log, and SnapInfo volumes. The SnapInfo volume just looks like another copy of the logs, so I don't think we need both; in any case, when restoring from the secondary SnapVault location we can't select a point-in-time restore, only a restore to the time the backup was taken.

 

On our production units we keep 14 days' worth of backups for point-in-time restores, using the switches below. I have posted these because they were originally misconfigured: the weekly backup was using -RetainBackups 1, so it was clearing out the SnapInfo volume.

 

Daily backups:


"C:\Program Files\NetApp\SnapManager for SQL Server\SmsqlJobLauncher.exe" new-backup  –svr 'SERVERNAME'  -RetainBackupDays  14 -RetainShareBackupDays  14 -cpylgbkshare NOTHING_TOSHARE -lb -bksif -RetainSnapofSnapInfoDays 14 -rudays 14 –mgmt standard

 

Transaction logs:

 

"C:\Program Files\NetApp\SnapManager for SQL Server\SmsqlJobLauncher.exe" new-backup  –svr 'SERVERNAME'  -cpylgbkshare NOTHING_TOSHARE -lgbkonly -bksif –mgmt standard

 

Weekly:

 

"C:\Program Files\NetApp\SnapManager for SQL Server\SmsqlJobLauncher.exe" new-backup  –svr 'SERVERNAME'  -cpylgbkshare NOTHING_TOSHARE -lb -bksif –mgmt weekly  -ArchiveBackup  -ArchivedBackupRetention Weekly

 

By the way, we are running on 7-Mode.

Can't restore files more than 2 months old in Commvault


Hi everyone,

 

I'm new here, and I'm having trouble restoring files from tape with Commvault IntelliSnap.

As I couldn't find anything helpful on Google, I registered here to see if someone could share some information on this.

 

I have a 2 GB folder that needs to be restored from a VM (which is backed up every day). When I follow the usual procedure to restore a file - checking the "Backup history" and selecting the desired date - it only lists the backup jobs from the last 30 days.

The restore date I need is more than 60 days old, but I can't seem to make it work, even though the tapes used for that backup are inside the library.

 

The Commvault version is 11 (build 80), and the library associated with it (where the tape is located) is an HP MSL4048, in case that's relevant.

 

I've already tried everything I could; if you have anything else that might help, I'm open to suggestions.

 

Thanks in advance.

 

Regards,

Ricardo

NetApp mirrored HA pair


In a mirrored HA pair configuration, how many unique IPs are required?

 

  1. How many IPs for the cluster?
  2. How many IPs per node?
  3. How many IPs per SVM?

Do both clusters use the same SVM IP?

 

If one cluster of the mirrored HA pair goes down, what happens? And via which IP will we get the log files?

 

Snapvault - Renaming Destination qtree (7Mode)


Hello,

A question about the good old 7-Mode stuff.

Is it possible to rename the destination qtree of an existing SnapVault relationship without a new baseline?
By the way, we are also renaming the source volume and, of course, the destination volume.
On the source, there is no qtree configured.


For example:

 

Now:


primary-filer:/vol1/-  secondary-filer:/vol1/qtree1

 

 

New Target-Relationship:


primary-filer:/vol1_new/-  secondary-filer:/vol1_new/qtree1_new

 

 

Data ONTAP 8.2.4P5 (7-Mode)

SnapVault

 

 

kind regards
Gunnar

ndmpcopy authentication failed


Hi all,

I was working with ndmpcopy on ONTAP 8.3.2P2.

The cluster is running in vserver scope and NDMP is on. The NDMP protocol is enabled on the vserver.

 

I ran ndmpcopy and got an authentication-failed message.

 

cluster::> node run -node node ndmpcopy  -sa ndmpuser:password  -da ndmpuser:password  source-ip:/vserver/volume dest-ip:/vserver/volume
Ndmpcopy: Starting copy [ 44 ] ...
Ndmpcopy: source-ip: Notify: Connection established
Ndmpcopy: dest-ip: Notify: Connection established
Ndmpcopy: Authentication failed for source
Ndmpcopy: Done
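
With vserver-scoped NDMP, the password for an NDMP user is not the account's login password; it is a separate one generated on the cluster. A sketch of generating and re-checking it (user and vserver names as in the post; verify the syntax on 8.3.2):

cluster::> vserver services ndmp generate-password -vserver vserver -user ndmpuser
cluster::> vserver services ndmp show -vserver vserver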



Thanks,

Chi


Snapmirror license in cluster cascade - "Snapmirror license not available" error


Recently I installed an additional cluster machine into an existing backup setup. We now have a total of 3 clusters. Cluster A is a filer that snapvaults to Cluster B. I have added a new machine, Cluster C, into the setup for backup. Cluster B will snapmirror to Cluster C. I have purchased and installed SnapMirror/SnapVault licenses on all 3 machines. Everything is working nicely between A and B. I have set up the intercluster relationship between B and C, and the relationship status between the new machines is good.

 

When I try to create a SnapMirror relationship between B and C, the ONTAP 9 GUI reports a "Snapmirror license is not available" warning after a long pause. I have double-checked the SnapMirror licenses in the GUI on all machines and they are installed correctly. Does this mean I require an additional SnapMirror license for the B and C clusters?
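
One way to rule out a GUI quirk is to check the licenses and the cluster peering from the CLI on both B and C. A sketch:

clusterB::> system license show -package SnapMirror
clusterC::> system license show -package SnapMirror
clusterB::> cluster peer show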

 

*Cluster A* -- Snapvault -- *Cluster B* -- Snapmirror -- *Cluster C*
    Filer                   Backup node 1                Backup node 2

 

 

Check existence of backup


Hi

 

I've been a SQL Server DBA for a while, but I'm new to SnapManager.  What I'm trying to do is create an automated system for checking that databases have been backed up (either natively or through SnapManager), and that the backup devices still exist at the time of the checks.  I know how to do this for native backups, but is there a PowerShell (or other) command that will check that the SnapManager backup of a particular database hasn't been deleted between when it was made and when it was checked, please?
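
For what it's worth, the SMSQL PowerShell snap-in includes a get-backup cmdlet that lists the backups SnapManager still knows about, so one approach is to query it on a schedule and test for the backup you expect. A sketch only; the -d parameter syntax below is an assumption mirroring the other SMSQL cmdlets, so check get-backup -? in the SMSQL PowerShell window:

PS C:\> get-backup -svr 'SERVERNAME'
PS C:\> get-backup -svr 'SERVERNAME' -d 'SERVERNAME', '1', 'MyDatabase'

If a backup you verified earlier no longer appears in the list, it has been deleted since it was made.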

 

Thanks

John

Snap Creator 4.3.0 - Using "Event Settings" only for "Failure Trap Message"


Hello,

 

We want to use the SC Event Settings for sending failure trap messages only.

 

Information from the administration guide:

SENDTRAP:
Interfaces with your monitoring software or email, enabling you to pass the alerts that are generated from Snap Creator into your own monitoring infrastructure. The %MSG variable is the message sent from Snap Creator.
--> My understanding: This channel is the only one that sends the FAILURE_MSG (%MSG), but it also sends the SUCCESS_MSG (%SUCCESS_MSG).

FAILURE_MSG:
Logs the failure message that is defined in case of a Snap Creator failure. This failure message can also be sent to SENDTRAP if SENDTRAP is defined.
--> My understanding: The FAILURE_MSG can only be sent by SENDTRAP. For example: /usr/bin/mailx -s %MSG matthias@xxxxxx.de </dev/null

SUCCESS_TRAP:
Interfaces with your monitoring software or email, enabling you to pass the success message generated from Snap Creator into your own monitoring infrastructure. The %SUCCESS_MSG variable is the success message for Snap Creator.
--> My understanding: This channel can only send the SUCCESS_MSG (%SUCCESS_MSG).

SUCCESS_MSG:
After a successful Snap Creator backup, this setting logs the message that is defined. The message is also sent to SUCCESS_TRAP, if SUCCESS_TRAP is defined, or to SENDTRAP, if SENDTRAP is defined.

--> My understanding: When no SUCCESS_TRAP is defined, the SUCCESS_MSG will be sent by SENDTRAP.

 

Why are there two options to send the email?

One channel is only for success messages, and one is for both (if no SUCCESS_TRAP is defined) or only for error messages.

 

We have many SC schedules for SnapMirror tasks, because we want to control those tasks together with the SnapVault schedules in SC.

And it is not very handy to get a success message every 15 minutes or so.

 

Is it possible to configure the event settings only for error messages?

 

My workaround #1 - Creating my own "null" output

Create empty file /opt/NetApp/scSuccessTrap/null
chmod 700 null


SUCCESS_TRAP=/opt/NetApp/scSuccessTrap/null

 

SC executes the empty bash script null; the output is sent into nirvana.

SC log: Sending Trap message to external system /opt/NetApp/scSuccessTrap/null.

 

My workaround #2 - Redirecting the output to a file instead of sending an email

Create file /opt/NetApp/scSuccessTrap/redirect
chmod 700 redirect

 

File content:

#!/bin/bash
timestamp=$(date +%Y%m%d_%H%M%S)
date=$(date +%Y%m%d)
echo $timestamp $1 >> /opt/NetApp/scSuccessTrap/"$date"_successtrap_redirect

 

SUCCESS_TRAP=/opt/NetApp/scSuccessTrap/redirect %SUCCESS_MSG

 

SC executes the bash script redirect, which generates a new file every day with the output from SC.

SC log: Sending Trap message to external system /opt/NetApp/scSuccessTrap/redirect "INFO: NetApp Snap Creator Framework finished successfully '(Action: backup) (Config: xxxxxx)'"

 

Output file example:

20170107_004519 INFO: NetApp Snap Creator Framework finished successfully '(Action: backup) (Config: xxxxxx)'
20170107_004519 INFO: NetApp Snap Creator Framework finished successfully '(Action: backup) (Config: xxxxxx)'

 

 

 

Regards,

Matthias

Error [scf-00013] occurs when starting a consistency group


Hello,

 

During our backups with Snap Creator 3.5.0 we often see this error:

[Tue Jan 10 03:05:47 2017] ERROR: [scf-00013] in Zapi::invoke, cannot connect to socket at /</usr/local/scServer3.5.0/snapcreator>SnapCreator/ZAPIExecutor/StorageZAPIExecutor.pm line 88.

The next time it runs, the backup works.

Any ideas?
