Hi community,
what's the best-practice way to update a vault target using SnapCenter?
For example, I have NFS datastores; how can I trigger my backup so that the vault update is managed by SnapCenter?
Thanks a lot. :-)
Hello Team,
I'm trying to install the PostgreSQL custom plug-in for SnapCenter 4.2 on a CentOS 7.7 host.
I modified /etc/redhat-release so that my host looks like a real Red Hat system.
With that, the installation was successful.
I can see on my host that both the scc and spl services are running:
[root@centos ~]# /opt/NetApp/snapcenter/scc/bin/scc status
Checking status of SnapCenter PluginCreator Service
SnapCenter PluginCreator Service is running as process 5549
[root@centos ~]# /opt/NetApp/snapcenter/spl/bin/spl status
SPL:Checking status of SnapCenter Plugin Loader
SPL:SnapCenter Plugin Loader is running as process 4102
[root@centos ~]# ps -edf | grep 5549
root 5549 5517 4 15:37 ? 00:00:29 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.232.b09-0.el7_7.x86_64/jre/bin/java -Xms128m -Xmx1024m -XX:MaxPermSize=256m -DINSTALL_PATH=/opt/NetApp/snapcenter/scc -classpath /opt/NetApp/snapcenter/scc/lib/scAgent-2.0-core.jar:/opt/NetApp/snapcenter/scc/etc:/opt/NetApp/snapcenter/scc/lib/* com.netapp.snapcreator.agent.nextgen.Starter start
[root@centos ~]# ps -edf | grep 4102
root 4102 1 2 15:20 ? 00:00:34 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.232.b09-0.el7_7.x86_64/jre/bin/java -Xms256m -Xmx2G -XX:MaxMetaspaceSize=256m -XX:OnOutOfMemoryError=restart_plugin_loader_services.sh -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -jar spl-main-4.2.jar start -classpath /opt/NetApp/snapcenter/spl/lib/activation-1.1.1.jar:/opt/NetApp/snapcenter/spl/lib/bsh-2.0b6.jar:/opt/NetApp/snapcenter/spl/lib/cglib-nodep-3.2.9.jar:/opt/NetApp/snapcenter/spl/lib/commons-compiler-2.7.5.jar:/opt/NetApp/snapcenter/spl/lib/commons-lang-2.6.jar:/opt/NetApp/snapcenter/spl/lib/commons-lang3-3.8.1.jar:/opt/NetApp/snapcenter/spl/lib/concurrentlinkedhashmap-lru-1.4.2.jar:/opt/NetApp/snapcenter/spl/lib/cxf-core-3.2.7.jar:/opt/NetApp/snapcenter/spl/lib/cxf-rt-frontend-jaxrs-3.2.7.jar:/opt/NetApp/snapcenter/spl/lib/cxf-rt-rs-client-3.2.7.jar:/opt/NetApp/snapcenter/spl/lib/cxf-rt-transports-http-3.2.7.jar:/opt/NetApp/snapcenter/spl/lib/cxf-rt-transports-http-jetty-3.2.7.jar:/opt/NetApp/snapcenter/spl/lib/jackson-core-asl-1.9.13.jar:/opt/NetApp/snapcenter/spl/lib/jackson-jaxrs-1.9.13.jar:/opt/NetApp/snapcenter/spl/lib/jackson-mapper-asl-1.9.13.jar:/opt/NetApp/snapcenter/spl/lib/janino-2.7.5.jar:/opt/NetApp/snapcenter/spl/lib/java-sizeof-0.0.4.jar:/opt/NetApp/snapcenter/spl/lib/javassist-3.19.0-GA.jar:/opt/NetApp/snapcenter/spl/lib/javax.activation-api-1.2.0.jar:/opt/NetApp/snapcenter/spl/lib/javax.annotation-api-1.3.jar:/opt/NetApp/snapcenter/spl/lib/javax.servlet-api-3.1.0.jar:/opt/NetApp/snapcenter/spl/lib/javax.ws.rs-api-2.1.1.jar:/opt/NetApp/snapcenter/spl/lib/jaxb-api-2.3.1.jar:/opt/NetApp/snapcenter/spl/lib/jaxb-core-2.3.0.1.jar:/opt/NetApp/snapcenter/spl/lib/jaxb-impl-2.3.1.jar:/opt/NetApp/snapcenter/spl/lib/jcip-annotations-1.0.jar:/opt/NetApp/snapcenter/spl/lib/jcommander-1.72.jar:/opt/NetApp/snapcenter/spl/lib/jetty-continuation-9.4.18.v20190429.jar:/opt/NetApp/snapcenter/spl/lib/jetty-http-9.4.18.v20190429.jar:/opt/NetApp/snapcenter/spl/lib/jetty-io-9.4.18.v20190429.jar:/opt/NetApp/snapcenter/spl/lib/jetty-security-9.4.18.v20190429.jar:/opt/NetApp/snapcenter/spl/lib/jetty-server-9.4.18.v20190429.jar:/opt/NetApp/snapcenter/spl/lib/jetty-util-9.4.18.v20190429.jar:/opt/NetApp/snapcenter/spl/lib/jsr250-api-1.0.jar:/opt/NetApp/snapcenter/spl/lib/log4j-1.2.17.jar:/opt/NetApp/snapcenter/spl/lib/logback-classic-1.1.4.jar:/opt/NetApp/snapcenter/spl/lib/logback-core-1.1.4.jar:/opt/NetApp/snapcenter/spl/lib/migration-4.2.jar:/opt/NetApp/snapcenter/spl/lib/nn-codegen-4.0.J535.jar:/opt/NetApp/snapcenter/spl/lib/ojdbc8-8.jar:/opt/NetApp/snapcenter/spl/lib/orika-core-1.5.2.jar:/opt/NetApp/snapcenter/spl/lib/paranamer-2.8.jar:/opt/NetApp/snapcenter/spl/lib/podam-4.7.3.RELEASE.jar:/opt/NetApp/snapcenter/spl/lib/slf4j-api-1.7.25.jar:/opt/NetApp/snapcenter/spl/lib/smcore-contracts-4.2.jar:/opt/NetApp/snapcenter/spl/lib/snapcenter-cli-4.2.jar:/opt/NetApp/snapcenter/spl/lib/spl-common-4.2.jar:/opt/NetApp/snapcenter/spl/lib/spl-main-4.2.jar:/opt/NetApp/snapcenter/spl/lib/stax2-api-3.1.4.jar:/opt/NetApp/snapcenter/spl/lib/woodstox-core-5.0.3.jar:/opt/NetApp/snapcenter/spl/lib/xmlschema-core-2.2.3.jar
I installed this plug-in on port 8146,
and as you can see my host is listening on that port:
[root@centos ~]# netstat -ltnp | grep -w ':8146'
tcp6       0      0 :::8146       :::*       LISTEN      4102/java
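If it helps, a quick way to confirm that 8146 is reachable over the network (and not just listening locally) would be something like the following, run from the SnapCenter server or any other machine; 'centos-host' is just a placeholder for however SnapCenter resolves my host:
# Minimal reachability check: a completed TLS handshake (or a clean "connection refused")
# proves the port is open end to end, while a timeout points at a firewall in between.
curl -kv --max-time 5 https://centos-host:8146/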
But the SnapCenter GUI displays the following error about the services on my host:
Do I have any chance of getting this plug-in working on CentOS, or do I have to install Red Hat before anything else?
TIA
Hi,
I am using Snap Creator 4.3.3P1 to set up backups of HANA 2.0 multi-tenant with a single tenant database.
I have created a single configuration that does both local Snapshot backups and file-based backups.
Manual execution of the file-based backup works fine, but scheduled file-based backups don't run; the schedule never executes at all. What could be the issue?
Manual and scheduled Snapshot-based backups, however, are working fine. Any help is appreciated.
Thanks in advance.
Hi
I have SnapCenter 4.1.1.1 and several resource groups, but with only one of them I get the error
"No SnapMirror relationships were found. Resolution: Please make sure that secondary storage systems are registered and host can resolve them correctly.Unable to find SnapVault destination for the source volume(s)"
When I check on the NetApp side, the volumes show as protected; everything looks OK.
The primary backup is fine, but when I manually update the relationship I see that the snapshots have no 'daily' label; the label is blank ('-'). I also stopped the replication and restarted it, but the error is still there.
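If it helps, this is roughly how the labels can be checked directly on the clusters (the SVM, volume and policy names below are placeholders for ours):
# On the primary: show which snapmirror-label each snapshot actually carries ("-" means none).
volume snapshot show -vserver svm_primary -volume vol_data -fields snapmirror-label
# On the secondary: show which labels the vault/mirror policy expects to transfer and keep.
snapmirror policy show -vserver svm_secondary -policy XDPDefault -instance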
Can someone help me resolve this problem?
Thanks
Hello,
I am just testing the SnapCenter SQL plug-in and preparing to migrate from SnapManager for SQL. The link below describes the three backup types; however, the documentation doesn't really clarify to me what they mean in practical terms.
I want to replicate what we do in SnapManager, which is a full nightly backup plus tlog backups every 15 minutes. My guess is that I should schedule a nightly "Full backup and log backup" (to get a full + tlog) and then a "Transaction log backup" every 15 minutes. From what I can tell, the tlog backup job doesn't have a retention setting. Does it use the retention of the "Full backup and log backup" job?
Also: is the simplest way to do this to make the SQL instance the lone resource of a resource group, and then apply both backup types to the resource group?
Lastly, I would assume that Full database backup is simply a full backup without a tlog backup. However, the description in the link below seems to indicate it is actually a system database backup only. If that is accurate, would I then schedule this type of backup separately?
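If it helps anyone answer, my plan is to run each backup type once and then check what SQL Server itself recorded (a rough check, assuming the plug-in writes to the msdb backup history the way SnapManager's VDI backups did); the server name and Windows authentication below are placeholders:
# type: D = full database backup, I = differential, L = transaction log
sqlcmd -S SQLSERVER01 -E -Q "SELECT database_name, type, backup_finish_date FROM msdb.dbo.backupset ORDER BY backup_finish_date DESC;"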
Would love any feedback or assistance on this as I find the options very confusing.
Hello,
how is it possible to change the destination of the transaction log backups (.trb files) for Microsoft SQL Server 2012? Does this change have to be made in SnapManager or in SQL Server Management Studio?
Kind regards, Lutz
I am thinking about using Snap Creator to create an Oracle DB on the remote site, cloned from a Snapshot copy taken after putting the DB into hot backup mode.
Can I use this approach for DR, and why (or why not)?
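To illustrate, the kind of hot-backup bracket I have in mind is roughly this (a minimal sketch, run as the Oracle OS user; the snapshot step in the middle would be whatever Snap Creator drives):
#!/bin/bash
# Put the database into hot backup mode before the storage snapshot is taken...
sqlplus -s / as sysdba <<'EOF'
ALTER DATABASE BEGIN BACKUP;
EXIT;
EOF
# ...the Snapshot copy / SnapMirror update to the remote site happens here...
# ...then take the database out of backup mode again.
sqlplus -s / as sysdba <<'EOF'
ALTER DATABASE END BACKUP;
EXIT;
EOF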
Hi,
I have installed the SnapCenter 4.2.0 agents on a two-node SQL Server cluster with an active-active configuration, but SnapCenter can only access the instances running on the node that owns the cluster name resource. Does anybody know how I can work with the instances on the second cluster node?
Hi guys,
We're having fun with SnapCenter and SnapVault: they work fine when using a fan-out configuration, but a cascade is not supported, as per the link below.
Currently we are thinking of adding a post-job script to add the snapmirror-label and using that to drive a SnapVault cascade outside of SnapCenter, as sketched below.
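As a rough sketch of that post-job script (cluster, SVM, volume and label names are placeholders, and in practice the snapshot name would be picked up from the SnapCenter job environment rather than passed as an argument):
#!/bin/bash
# Hypothetical post-job script: tag the snapshot SnapCenter just created with a
# snapmirror-label so that a vault policy further down the cascade will transfer it.
SNAPSHOT_NAME="$1"
ssh admin@cluster1 "volume snapshot modify -vserver svm1 -volume vol_data -snapshot ${SNAPSHOT_NAME} -snapmirror-label daily"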
Does anyone know if cascade support is coming?
We have a policy configured to delete our backups after 7 days. The policy is not deleting the older backups, which is causing our volumes to fill up with backups that should have expired. We are running vSphere 6.7 Update 3.
Thanks - Chris
Hello,
I am planning to migrate our SnapCenter VMware Plug-In 4.1.1 to the Data Broker 1.0.1 sometime tomorrow. The PowerShell command listed is:
invoke-SCVOVAMigration -SourceSCVHost old-SCV-host-IP
-DestinationSCVOVAHost new-OVA-IP -OVACredential OVA-credentials
-ByPassValidationCheck -Overwrite -ContinueMigrationOnStorageError
-ScheduleOffsetTime time-offset
For the most part this seems straightforward; however, I can find no explanation of the -ScheduleOffsetTime switch. What does it mean? The syntax indicates we are to enter a time offset, but I don't know what that value should be. Any ideas?
We are in the early stages of testing the SnapCenter Plug-in for SQL Server to replace SnapManager for SQL. We noticed today that tlog backups generate errors if any databases use the simple recovery model. Since there is typically a mix of databases that do and don't use simple recovery, is there any way to skip those databases during tlog backups?
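For anyone hitting the same thing, this is how we identify which databases on an instance use simple recovery (and would therefore have to be excluded from the tlog resource group); the server name and Windows authentication are placeholders:
# Databases in SIMPLE recovery cannot take transaction log backups.
sqlcmd -S SQLSERVER01 -E -Q "SELECT name, recovery_model_desc FROM sys.databases WHERE recovery_model_desc = 'SIMPLE';"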
Hi all,
Has anyone any experience getting scripts to run from a Data Broker policy, aka the SnapCenter Plug-in for VMware vSphere? I have a Perl script set up to do the same thing as the PowerShell script I use on Windows. If I run it on the Data Broker itself it works fine, but calling it from the policy fails.
The documentation is a little sparse; it just says the script needs to be Perl (which it is) and that to call it you put an absolute path in the pre/post script field. The documentation is here for reference: https://library.netapp.com/ecm/ecm_download_file/ECMLP2861085
The path I'm putting in the policy is /home/scripts/dc1_srm_mgmt_01_volumes.pl
If I run "perl /home/scripts/dc1_srm_mgmt_01_volumes.pl" locally on the Data Broker it works just fine. I know that, for some reason, the PowerShell script I ran on Windows used the SYSTEM account rather than the agent's service account. I'm wondering which account is used to run the Perl script on the Data Broker appliance? It might be a permissions thing; I've tried chmod 777 for fun but no joy. I'm also not sure about the owner; it's currently the diag user. Perhaps I should change it to root or maint?
Any help would be appreciated. Oh, and yes, please develop native cascade support; fan-out is too limited unless you've got enough bandwidth to run a mirror and a second vault across your line.
Hello,
Per the Storage Automation Store, the DB2 versions listed below are supported with version 1.0 of the plug-in, which is compatible with SnapCenter 4.0:
IBM DB2 versions supported: 9.7.0.9, 10.1.0.3, 10.5.0.3, 10.5.0.6, 10.5.0.7, 10.5.0.8, 10.5.0.xxx; Java (JDK) 1.8 or later.
I'm curious to know when DB2 11.1 will be supported by SnapCenter?
Hey guys,
I'm looking at doing a migration from the Windows-based SnapCenter Plug-in for VMware vSphere (4.1.1) to NetApp Data Broker 1.0.1.
Reading the documentation, I see that Invoke-SCVOVAMigration requires a source and a deployed destination OVA.
My issue is that I would like to re-use the IP address I put in place when I initially installed the plug-in on Windows.
My reasoning is that I have a number of firewall rules I will need to put in place between SnapCenter and the plug-in (which in this case is off-site).
So, a couple of questions: is there any way to manually export the metadata to some sort of file, power off the old plug-in, then power up the Data Broker, register it, and import the metadata?
Failing that, can I re-IP a NetApp Data Broker after the fact?
If so, where would I have to reconfigure it (in terms of vCenter, etc.)?
Any input would be helpful.
Hello,
We are trying a POC with SnapCenter and Microsoft server-to-server storage replication using Storage Replica; the concept is creating a clustered storage disk, similar to an Exchange DAG or SQL AlwaysOn.
The issue is that when we fail the cluster role over to the secondary server, SnapCenter stops being able to connect the LUN path to the data disk. The moment we fail back to the first server, the path to the disk is back. Has anyone seen a similar issue or managed to get this setup working?
I'm wondering if there is a mechanism in the newer versions of SnapCenter to back up an Oracle database and, at the same time, back up a related NFS volume.
We have an app with an Oracle back end. To get a proper application-consistent backup we'd shut down the app, snapshot the Oracle DB and the NFS application volume at the same time (or as close as possible, since everything is shut down at that point), then bring all the services back up. The stopping and starting of services we can handle with pre and post scripts in a policy (see the sketch below), but do I really need two separate jobs to back up one thing? There has got to be a way to manage it with one policy/job/setup, surely? We can't be in a unique position in wanting to do this sort of thing.
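To make the idea concrete, the pre script attached to the policy would be something like this (the service name is a placeholder, and the matching post script would simply start the application again):
#!/bin/bash
# Hypothetical pre script: stop the application so both the Oracle DB and the NFS
# application volume are quiescent before the snapshots are taken.
systemctl stop myapp.service
# Fail the backup early if the application did not actually stop.
if systemctl is-active --quiet myapp.service; then
    echo "application still running, aborting backup" >&2
    exit 1
fi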
SnapCenter 4.1
I have a SQL server with three instances, and a separate resource group for each instance.
My problem is that for only one resource group (SQL instance), for the SQL default databases (master, msdb), SnapCenter shows more snapshots under SnapMirror and SnapVault than the retention defines.
It should be 31, but I have more than 31; yet when I check the snapshots on the NetApp, the number is correct. In the end I have to remove the IDs in the metadata from the DB repository.
The cleanupsecondary command doesn't do anything.
Does someone have an idea what I should do?
Thanks
Is there a way using the Swagger API (http://docs.netapp.com/ocsc-43/index.jsp?topic=%2Fcom.netapp.doc.ocsc-ag%2FGUID-F2F08997-953E-4C60-B572-F435A5BD77F5.html) to refresh a host/resource?
I see that when using the application it accesses this URL:
https://servername:8146/ProvisionDisk/GetDisks
But I don't see any way to manually run a refresh on a host.
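For what it's worth, the endpoint above can be called directly to replay what the GUI does (this is just an observation from watching the GUI, not a documented refresh call, and whatever authentication the GUI session carries would still be needed):
# -k skips certificate validation for the plug-in's self-signed certificate.
curl -k "https://servername:8146/ProvisionDisk/GetDisks"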