Channel: Data Protection topics

SnapManager for SQL Restore of VMDK / NFS from Snapvault destination


Hi there

 

SMSQL supports data placed in VMDK files on an NFS datastore.

Placing the data in VMDK files has some management benefits, yet when you add a SnapVault destination it becomes a different story with regard to restores.

 

We have an AFF8060 cDOT 8.3 system where the primary data is placed. We use the approach above with NFS, placing our SQL data in VMDK virtual disks.

Of course we take care to create a specific datastore for DB+LOG, and one for SnapInfo.

We then create a SnapVault relationship from the volumes to a secondary system.

We would like to keep one week's worth of snapshots on the primary system, and 12 months on the secondary system.

This is all set up and works just fine: SnapDrive talks to VSC, snapshots are coordinated with VMware, and even the SnapVault (Archive) option is available inside SnapManager.

And it works just as expected when doing backups.

 

It becomes a bit more complicated when we want to restore databases older than one week, where we have to access the data on the secondary system.

First of all, the backups on the secondary system are not shown inside SMSQL when choosing the Restore option.

 

You are also unable to mount the datastore from SnapDrive... I suspect that SD tries to mount it as a LUN, and it fails.

 

The only option left is:

1. Create a FlexClone on the secondary system

2. Export the FlexClone to the VMware cluster

3. Mount the datastore in vSphere

4. Attach the VMDK to the SQL server

5. Copy the database and log files to the primary volume

6. Attach the database in SQL

 

All of the above are manual operations; of course they can be scripted, but it becomes even more complicated if you would like to do an up-to-the-minute restore, i.e. use the log files to roll forward.
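For reference, steps 1-3 can be driven from the clustered ONTAP CLI along these lines (a rough sketch only; the SVM, volume, snapshot and export-policy names are placeholders for our environment):

volume clone create -vserver svm_backup -flexclone sqldb_restore_clone -parent-volume sqldb_vault -parent-snapshot sqlsnap__example_daily
volume mount -vserver svm_backup -volume sqldb_restore_clone -junction-path /sqldb_restore_clone
volume modify -vserver svm_backup -volume sqldb_restore_clone -policy esx_hosts

From there the datastore still has to be mounted in vSphere and the VMDK attached to the SQL server by hand, so it only automates the storage side.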

 

I would like to know if anyone has found another, smarter way to deal with this.

Is there some new feature on the way from NetApp which "solves" this? (SnapCenter maybe? Though I doubt it.)

The only other way is to move to LUNs and attach via FC or iSCSI, which then gives us restores of SnapVault snapshots back in the GUI.

 

Or... (which is entirely possible) have I missed something in the setup?

 

/BM


VSC SMVI restore from SnapVault


Hi there

 

We have an AFF8060 cDOT 8.3 system which exports NFS datastores to our VMware cluster.

We use VSC and SMVI to do our backups.

We have then added our secondary storage system as a SnapVault target, which allows us to add the "Trigger SnapVault" option to our jobs.

This all works as expected.

When we would like to do a restore from a snapshot on the primary, it is easily done in the GUI.

Yet, if we would like to restore from a SnapVault snapshot on the secondary system, it becomes somewhat harder... we do it like this (rough CLI equivalents of steps 1-3 are sketched after the list):

 

1. Create a FlexClone of the secondary datastore volume

2. Export the FlexClone to the VMware cluster

3. Mount this datastore inside vSphere

4. Register our VM, and start it

5. Use Storage vMotion to move the VM back to the primary datastore.

(Of course we can also copy out one specific VMDK.)
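For steps 1-3 the manual commands look roughly like this (a sketch with placeholder names; the esxcli part can of course also be done from the vSphere client):

volume clone create -vserver svm_vault -flexclone ds01_restore -parent-volume ds01_vault -parent-snapshot smvi_example_backup
volume mount -vserver svm_vault -volume ds01_restore -junction-path /ds01_restore

Then, on an ESXi host, mount the clone as a temporary NFS datastore:

esxcli storage nfs add -H nfs_lif_of_svm_vault -s /ds01_restore -v ds01_restore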

 

It would however be great if this could be done via the VSC GUI.

Have I messed something up, since I cannot do it via the GUI?

 

Or is this feature on the way?

 

/BM

 

 

SnapCreator 4.3.0 Retrieving SnapVault status error


Hi all,

 

I'm quite fresh to storage, so please forgive any missing info. Just ask for more details and I will try to get them.

 

Recently we started to receive the following error when the SnapVault job checks the status of the SnapVault progress:

 

########## Checking SnapVault status for source relationship FILER:volume ##########

[2016-07-14 10:24:51,846] INFO: STORAGE-02110: Retrieving SnapVault status
[2016-07-14 10:24:51,846] ERROR: com.netapp.snapcreator.storage.executor.ZapiExecutorException: netapp.manage.NaAPIFailedException: Invalid empty value for input: maximum (errno=13115)
    at com.netapp.snapcreator.storage.executor.ZapiExecutorImpl.run(ZapiExecutorImpl.java:54)
    at com.netapp.snapcreator.storage.api.ontap.Ontap7ModeApi.executeRequest(Ontap7ModeApi.java:2314)
    at com.netapp.snapcreator.storage.api.ontap.Ontap7ModeApi.snapVaultStatus(Ontap7ModeApi.java:1357)
    at com.netapp.snapcreator.storage.StorageCoreImpl.snapVaultGetStatus(StorageCoreImpl.java:1371)
    at com.netapp.snapcreator.workflow.task.ZAPITask.snapVaultWait(ZAPITask.java:602)
    at com.netapp.snapcreator.workflow.task.ZAPITask.snapVaultUpdate(ZAPITask.java:1288)
    at com.netapp.snapcreator.workflow.task.SnapVaultTask.execute(SnapVaultTask.java:56)
    at com.netapp.snapcreator.workflow.impl.SCTaskCallableBlocking.call(SCTaskCallableBlocking.java:49)
    at com.netapp.snapcreator.workflow.impl.SCTaskCallableBlocking.call(SCTaskCallableBlocking.java:18)
    at java.util.concurrent.FutureTask.run(FutureTask.java:274)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1157)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:627)
    at java.lang.Thread.run(Thread.java:798)
Caused by: netapp.manage.NaAPIFailedException: Invalid empty value for input: maximum (errno=13115)
    at netapp.manage.NaServer.invokeElem(NaServer.java:751)
    at com.netapp.snapcreator.storage.executor.ZapiExecutorImpl.run(ZapiExecutorImpl.java:45)
    ... 12 more

 

 

The SnapVault update finishes successfully, but the progress check fails, which results in the job failing and spoils the statistics. :)

I'm not quite sure where I should look for the "maximum" value that seems to be causing the problem.
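In case it helps with diagnosing: since the stack trace goes through Ontap7ModeApi, the relationship state can also be checked directly on the 7-Mode secondary, outside Snap Creator (a sketch; the path is a placeholder for the real secondary volume/qtree):

snapvault status -l /vol/volume_sec/qtree_sec

That at least shows whether the relationship itself reports sane progress values while the ZAPI call fails.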

 

Has anyone encountered a similar problem?

 

Regards,

Darek

Exchange 2010 SME 6.X/7.X: add a 3rd NODE to a working 2-NODE SME DAG - HOW?


QUESTION: Should we make the NEW server a NEW third member of the DAG BEFORE or AFTER the integration of SME and the MOVE from temporary local storage to NetApp LUNs?

 

 

Exchange 2010 SP3, RU14

SME6.X or SME 7.2

SRV 2008R2 SP1

 

How it is running now: We have an existing, working 2-NODE DAG with SME running.

NetApp: SME is running on the other two NODES and we want to add another NODE.

Target: We want to add another NODE to the DAG.

 

 

* The NEW 3rd DAG node (SRV2008R2 with Exchange 2010 SP3 RU14) is set up and the LAN and DAG interfaces are correct

 

 

Now, with SME, we had to MOVE/MIGRATE from D: to the real LUNs.

 

Question:

 

* Should we make the NEW server a member of the DAG BEFORE or AFTER the integration of SME and the MOVE from temporary local storage to NetApp LUNs?

* I don't remember how we did that the last time

* I remember that in the DAG all DRIVES and PATHS have to be exactly the SAME, so how can we do this then?

 

 

 

 

2ND Question

 

If the LUNs are all there, SHOULD we create the MDBs DIRECTLY on the STORAGE? This relates to ECMP1649948.pdf page 19. Even if we then have to RUN the migration wizard.

CAN the PATH then be identical in the FROM and TO fields in the screenshots below?

 

 

SnapManager® 7.1 for Microsoft® Exchange Server, Installation and Setup Guide

ECMP1649948.pdf page 19

-----------------------------------------------------------------------------------------------------------------------------

Migrating databases and configuring SnapManager for Exchange servers

Before you can back up your databases using SnapManager, you need to run the SnapManager

Configuration wizard for each Exchange server. You use the Configuration wizard to migrate

databases to NetApp storage and to configure SnapManager for your Exchange servers.

About this task

 

 

Even if you create new databases directly on NetApp storage, you need to run the Configuration

wizard to create a mapping between those databases and the SnapInfo directory.

-----------------------------------------------------------------------------------------------------------------------------

 

 

Greetings from Switzerland

 

 

 

This shows how we set up the two FIRST NODES before ANY productive data was on the Exchange servers a few years ago.

Both the MDBs and the LOCAL database + transaction LOGS were first PUT on the D: drives of both DAG nodes, then migrated to the LUNs with SME.

I can't do that with a NEWLY added node, because the PATHS would be different FROM the existing running setup...

 

 

 

 

 

 

The way we set up the FIRST TWO NODES (screenshot):

 

 

Existing DBs and paths, FINAL version with 2 NODES (screenshot):


 

SnapCenter 1.1 - VSC 6.2P2 attaching virtual disk failed


Hi all,

 

Problem: Attaching a virtual disk stays at 50% (screenshot attached).

 

Environment:

Ontap 9.0RC1

VSC 6.2P2

SnapCenter 1.1

vCenter 6.0U2

SVM (6 volumes): NFS 

 

What we've done:

A domain user with full rights in SnapCenter and vCenter; this user is assigned to the VSC Backup role on the … which was created using RBAC.

Added the VSC host in SnapCenter --> SnapCenter automatically added the SVM connection and detected the policies.

 

Backup is running fine, only attaching fails.

 

What could be the problem?

 

Thanks!

SM-SQL Configuration Sanity Check


Greetings All

 

I am a NetApp resident engineer currently in a test/POC phase with SM-SQL in a straight VMDK environment (no RDMs) backed by NetApp FC LUN datastores, with cDOT 8.3.1. SM-SQL v7.2.1 is installed fine along with SnapDrive underneath (v7.1.3P1). My customer, although enthusiastic about SM-SQL, is adamant that it be configured in the following ways, both in apparent direct conflict with the install guide (not to mention how SnapManager has worked like this for ~20 years):

 

1) He rails at the idea that the subject DB of a given server must be migrated to "other" NetApp disks, when arguably the DB is already ON NetApp disk, on the VMDK (in the datastore served by the NetApp LUN), as is the logs disk. He says he's certain there was some way to do this "migration in place", even though I see no such alternate doc (and BTW this guy is a former NetApp SE!). His position is that management would kill him if he had to migrate every SQL DB in their environment to new disk.

For example, source disks are D: for the DB and G: for logs; destination disks are K: for user DBs, L: for logs and S: for SnapInfo. I think the customer expects somehow (in the Configuration Wizard) to map source D: with destination D: and source G: with destination G:. Trying this just for grins, for the D: drive the wizard immediately errors out complaining that user DBs cannot be on the same disk as the system/master DBs and that it would have to fall back to stream-based restores, duh, and so I'm stuck. However, I then hit upon the goofy idea: "Fine, then migrate the system/master DBs to the K: drive instead!" Then G: to G: and finally SnapInfo to S:. So my resulting question is: will this even work and will SM-SQL function properly? Is there some other similar trick? Or am I on crack?

 

2) Related to (1) but somewhat separate: regardless of "migration in place" or not, the customer also rails at the idea of having to create separate VMDKs on separate datastores (on separate LUNs in separate volumes) for the destination disks, and instead thinks they can all go on the same existing datastore hosting the VM! This is in 100% direct contradiction with the doc (see the attached capture of the VMDK config from the SM-SQL 7.2 install guide), which clearly illustrates separate VMDKs in separate datastores within (as stated elsewhere in the doc) separate volumes. Yes, it's great that multiple DBs can fan into these 3 disks, but you still need to start with 3 new VMDKs on separate volumes.

I *assume* this is because SM-SQL thinks that, with independent snapshots on the K, L and S volumes, it can snaprestore each with abandon during a restore. But the reality in this scenario is they'd all be on the SAME volume, because all the VMDKs are created in the same datastore, and as a result these snaps would trip each other up and wreck the restore job (and maybe even the DB itself). However, this in turn assumes SM-SQL does volume-level snaprestores and not the more selective single-file snaprestore, so as not to affect the other contents of the volume. If indeed single-file snaprestore is done across all 3 VMDKs during the SM-SQL restore process, then in theory the 3 VMDKs (K, L, and S) just maybe *could* all peacefully coexist within the same volume. Or am I on crack again?
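For what it's worth, both flavours exist at the ONTAP level, and the difference is easy to see from the CLI (a sketch with placeholder SVM/volume/snapshot/file names; which one SM-SQL actually issues is exactly what I am trying to confirm):

volume snapshot restore -vserver svm_sql -volume vol_sqldata -snapshot sqlsnap_example
volume snapshot restore-file -vserver svm_sql -volume vol_sqldata -snapshot sqlsnap_example -path /vol/vol_sqldata/vm1/vm1_k_drive.vmdk

The first reverts the entire volume to the snapshot; the second reverts only the one file (e.g. a single VMDK) and leaves the rest of the volume alone.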

 

 

So that's it. The last thing is that my customer is an extremely sharp storage architect in his own right (NetApp and other vendors), so the answers I come back with either way must be rock-solid defensible, with no uncertainty, or he will come back, find holes, and continue to pick the issue apart.

 

SnapCreator 4.3 on AIX 6.1 Perl Installation Error


Hi,

 

I have a customer with a new AIX system (version 6.1). We are trying to install the Snap Creator Agent 4.3 on this system, but we get an error message.

 

Here is the output:

 

/opt/NetApp/scAgent4.3.0.

./snapcreator --setup
Can't load 'auto/Term/ReadKey/ReadKey.so' for module Term::ReadKey:     0509-022 Cannot load module auto/Term/ReadKey.
        0509-026 System error: A file or directory in the path name does not exist. at /</opt/NetApp/scAgent4.3.0/snapcreator>DynaLoader.pm line 219.
 at perlapp line 843.
BEGIN failed--compilation aborted at snapcreator.pl line 35.

 

The AIX 6.1 Version is:

oslevel -s
6100-09-07-1614

 

The Perl module has the following version:

lslpp -L|grep perl
  perl.libext                2.2.8.0    C     F    Perl Library Extensions
  perl.rte                 5.8.8.488    C     F    Perl Version 5 Runtime
  perl                       5.8.8-2    C     R    The Perl programming language
  perl-TermReadKey            2.30-1    C     R    A perl module for simple
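Not a solution, but these are the checks I would run next to narrow it down (assuming the system perl in /usr/bin is the one listed by lslpp above):

perl -MTerm::ReadKey -e 'print "ReadKey $Term::ReadKey::VERSION loaded\n"'
perl -e 'print join("\n", @INC), "\n"'
find /usr/opt/perl5 /opt -name ReadKey.so 2>/dev/null

The first line shows whether the system perl can load Term::ReadKey at all, the second shows where it looks for modules, and the third shows whether the ReadKey.so the error complains about exists anywhere on the box.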

 

 

 

I hope someone has an idea about this issue.

 

best regards,

Andreas

 

 

SMO repository update


We are migrating from 7-Mode to cDOT and are updating our SMO to the latest version, 3.4P3. We had 3.3, and the manual says the repository needs to be updated. However, I receive the error below when updating, which makes me wonder whether a new 3.4 repository version is actually available. On the other hand, I can't get my profile/backup job ready for SnapVault. I added my SnapVault destination cluster to SnapDrive (snapdrive config set ... and snapdrive config set -mgmtpath ...). From that point I was able to select the protection policy SnapManager_cDOT_Vault, but it then gives an error when applying (screenshot attached).

 

 


SnapManager 3.4.0P3 does not support managing Oracle 9i databases. Using SnapManager 3.4.0P3 you will not be able to perform any SnapManager operations on existing profiles for Oracle 9i databases or create a new profile for an Oracle 9i database. Refer to the SnapManager for Oracle Installation and Administration Guide for more details.

Once this operation has begun, you will not be able to cancel it.  Are you sure you wish to proceed with the repository upgrade (Y/N)?y
[ INFO] SMO-09283: Existing old repository version "330"
[ERROR] SMO-09212: Existing repository schema version 330 (as smo_ot on jdbc:oracle:thin:@//[xxxxxxxxxx]:1521/xxxx) is already greater than or equal to required repository version 330.
[ERROR] SMO-13032: Cannot perform operation: Repository Update.  Root cause: SMO-09212: Existing repository schema version 330 (as smo_ot on jdbc:oracle:thin:@//[xxxxxxxxx]:1521/xxxx) is already greater than or equal to required repository version 330.
[ INFO] SMO-13039: Successfully aborted operation: Repository Update
[ERROR] SMO-13048: Repository Update Operation Status: FAILED
[ INFO] SMO-13049: Elapsed Time: 0:00:01.247
Operation Id [N00a0f31ee91ea0a00bae4775f977bd3c] failed. Error: Existing repository schema version 330 (as smo_ot on jdbc:oracle:thin:@//[xxxxxxxxx]:1521/xxxx) is already greater than or equal to required repository version 330.


mirror-SnapVault fanout scenario


Hello folks,

 

I'm trying to implement a SnapMirror-SnapVault fanout scenario as shown in the picture.

Data protection deployment: mirror-vault fanout

The source volume has two relationships to two different clusters: one of them SnapMirror, the other SnapVault. The replication process is triggered by VSC & SnapCenter on vSphere. For this there are consistent snapshot backup jobs with the SnapMirror and SnapVault options checked for different schedules and retentions. I created four policies: an hourly snapshot and SnapMirror policy, a daily snapshot and SnapMirror policy, a daily snapshot and SnapVault policy, and a weekly snapshot and SnapVault policy.

 

There is no problem with the SnapVault policy and snapshots. The volume on storage system C, which is the vault system, has daily and weekly snapshots only. But the same daily and weekly snapshots also reside on the source volume on storage system A. Whether it is possible to keep only the last vault snapshot on the source, I have not been able to find out yet. This is the minor problem.

 

The main problem is that the SnapMirror destination volume on storage system B has all the snapshots of the source volume. Storage system B has no SnapVault license, and the required snapshot retention there is the last 24 hourly and the last 7 daily snapshots. I tried to create a new SnapMirror policy on the SnapMirror destination storage that keeps only one sm_created snapshot, but whatever I tried, all the snapshots on the source volume, including the daily and weekly labeled vault snapshots, transfer via SnapMirror.
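One thing I have been wondering about is whether a unified mirror-vault type policy on the B relationship would give me that retention via labels (a sketch only, assuming the destination cluster is on a release that supports mirror-vault policies; the SVM and policy names are placeholders):

snapmirror policy create -vserver svmB -policy mv_keep_24h_7d -type mirror-vault
snapmirror policy add-rule -vserver svmB -policy mv_keep_24h_7d -snapmirror-label hourly -keep 24
snapmirror policy add-rule -vserver svmB -policy mv_keep_24h_7d -snapmirror-label daily -keep 7

But I have not verified whether that works in combination with the VSC/SnapCenter-triggered updates.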

 

What is the solution? Is it to SnapMirror to both destinations and vault on only one of them?

 

Thanks,

 

SSC

 

SMHV on 2012 core

We are in the process of migrating from VMware to Hyper-V. My question is on SMHV. The VM team has decided to run 2012 core, and the storage protocol is SMB3. cDOT 8.3.2P1, SD 7.1.3P1, SMHV 2.1.1. I believe I have everything configured correctly.

I am running SMHV from a 3rd, administrative server that is running full 2012. In SMHV, on the administrative system, I am able to add the cluster; it sees the two cluster nodes and all the VMs on the cluster. The issue arises when running the Configuration Wizard: I get the error "StorageConnectionControl: An error occurred while processing SDGetStorageConnection request Object reference not set to an instance of an object".

I am able to set the SnapInfo directory, which places the environment in a "configured" state; however, when invoking the SnapInfo Settings a second time, the correct SMB path is displayed but the SAN radio button is selected rather than NAS.

Does anyone have an idea what that error means? I could not find any search hits on it. Do I have a valid configuration?

VSC restore option is missing version 6.2


Folks,

 

I do not see the restore option in VSC 6.2.

 

Am I missing anything here?

 

In version 4 there was a backup & restore option.

Commvault IntelliSnap for NetApp - Exchange Compliance Archiver


Hi Guys,

 

Now that SnapProtect has reached EOA, we're looking to license Commvault IntelliSnap for a customer. I understand that this is still a controller-based license from NetApp, which entitles you to download the CommCell from Commvault.

 

What I'm curious about is: does the Commvault IntelliSnap license include the Exchange Compliance Archiver agent, along with the other iDataAgents, or would this have to be purchased separately from Commvault (on a per-capacity/CAL basis)?

 

Anyone know? We're trying to position against EMC data domain email archiving.

 

Alternatively, if anyone knows what solution provides similar functionality on NetApp, I'd love some pointers.

 

Thanks!

SnapManager for SQL Snapvault archive issue - can you crack it?


A challenge for SMSQL lovers...

 

I have 3 SQL servers, out of 30 in total all running SnapDrive 7.1.1 and SnapManager 7.2, that have issues connecting to a newly implemented 8.3.2 cluster/SVM used for SnapVaulting. I have had a NetApp technical case open, then a Partner Helpdesk case after being told by tech support that it was configuration, then back to tech support, escalated, and I am now awaiting another escalation.

 

The issue is in adding credentials for these 3 servers to this SVM, either via HTTP or HTTPS: SnapDrive hangs for 10 minutes before reporting various communication errors, including "unable to get cDOT version". If default credentials are added with the correct details for this cluster's vsadmin account and an iSCSI connection is then attempted, it reports the same errors after a similar 10-minute timeout. Connections to non-SnapVault/production SVMs within the same datacenter work.

 

For the 3 servers with issues, we have in summary:

 

We have checked the SnapDrive configuration, including confirming that the connection goes to the mgmt LIF of the SVM, and it does

Checked that the iSCSI interfaces are set up, and added a secondary one as it did not exist

Checked that the vsadmin account is set up correctly, and it is

Checked DNS and tried via IP with the same result; tried many hosts file edits over the past week with no change. Resolution/reverse resolution is the same as for servers without the issue

Checked the network route, and it takes the same path as it does from servers without the issue

Observed in netstat -a output that the connection via HTTP or HTTPS is established

Saw no evidence in the logs on the storage using event log show *iscsi*

Created a new account with the vsadmin role as a test, no change

Tried SnapDrive 7.1.3 and 7.1.1, both compatible with cDOT 8.3.2; circumvented SnapDrive by adding the iSCSI connection address directly into the MS iSCSI Initiator as a discovery portal, which allowed a connection directly through that software

Rebooted the filer, which made no difference

Created a new SVM; tested from servers without the issue and could connect, tested from the 3 servers with the issue and could not

Saw in Wireshark traces that the "Client Hello" SSL packet is not answered by the cluster with the usual "Server Hello, Certificate" message, suggesting comms are stalling somewhere - this is where it is at with NetApp (the quick TLS handshake test below reproduces this without Wireshark)
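For anyone wanting to reproduce that handshake symptom without a full Wireshark capture, a plain TLS client test from an affected server against the SVM management LIF shows whether a Server Hello ever comes back (a sketch; the hostname and port are placeholders, and it assumes an openssl binary is available on the server):

openssl s_client -connect svm-mgmt-lif.example.local:443

On a working server this prints the certificate chain almost immediately; on the three broken servers I would expect it to hang, matching the traces.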

 

Any ideas on anything else to try? Let me know if you want any further data, I am in this for the long haul... :)

 

Cheers!

Clustered Data ONTAP FC interface for NDMP


Hello, guys!

 

I gave up after searching for information in the documentation, the Community, FieldPortal, etc. Nothing is said about configuring an FC interface in cDOT for a tape connection for NDMP backup over Fibre Channel.

 

There is information for 7-Mode that says I just need to set the FC port as an initiator and configure zoning between the port and the tape library. OK, there are physical ports in 7-Mode with fixed WWPNs. But I cannot use them the same way in C-Mode, which uses LIFs with NPIV. Do I need to create a LIF? How can I configure the FC interface to connect to the library?
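From the little I have pieced together so far (so please correct me if this is wrong): tape does not attach through a LIF at all but through a node's physical FC ports set to initiator mode, so the 7-Mode-style ucadmin commands are still used, just from the nodeshell. Something along these lines (a sketch; the node and adapter names are placeholders, and the mode change normally only takes effect after the port is taken offline or the node is rebooted):

run -node node01 ucadmin show
run -node node01 ucadmin modify -m fc -t initiator 0e

After zoning the initiator port to the library, the tape devices should then show up in storage tape show. But I would still like confirmation that this is the supported way in cDOT.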

 

I feel stupid, as I believe the solution is very simple and I am just misunderstanding something. Please help me sort this out, and provide a solution or a link to an article or document.

 

Best regards,

Karakhan

SnapCreator Xen configuration and issues


Configuration of backup using SnapCreator Framework with the XEN plugin.

 

I'm writing this so that it might help others avoid the same issues I have encountered. I'm not completely done, but I'm documenting the progress so far. If you have configurations that work better, just add them in a comment for reference.

 

The documentation is a little basic on how to use the XEN plugin.

 

Setup for reference.

Xen 6.5

Clustered ONTAP 8.3.2

SnapCreator 4.3 running on Windows 2012R2.

XEN administration installed on the same server as SnapCreator Framework. (xe.exe)

NFSv3

SnapVault to secondary site

 

Issues experienced (the relevant config lines are sketched after this list):

  • If the server name has spaces in it, the listing during backup fails. The listing in the GUI works.
  • The default agent timeout of 600 seconds was far too short; I had to increase it to 1800 seconds.
  • The Snap Creator server is a VM. If it is quiesced, then xe stops and the workflow dies. Removing this VM from the storage pool solved this.
  • Had to set APP_IGNORE_ERROR=Y to avoid the job failing just because a single VM could not be quiesced.
  • Emailing the results - good backup and bad backup.
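For reference, the relevant lines in my Snap Creator config profile ended up roughly like this (a sketch: APP_IGNORE_ERROR is the parameter mentioned above, while I am assuming SC_AGENT_TIMEOUT is the right name for the agent timeout parameter in 4.3, so check your own config file before copying):

SC_AGENT_TIMEOUT=1800
APP_IGNORE_ERROR=Y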

For email, this configuration worked; the issue was where the single apostrophes were placed. There is a fine post regarding email here: http://community.netapp.com/t5/Backup-and-Restore-Discussions/Sending-email-from-Snap-Creator-Windows-Server-how/td-p/31147

 

 

Still having an issue with unquiescing the VMs and getting the SnapVault update triggered without errors.

If you have info on getting this into a foolproof state, please enlighten me.


SnapCenter 1.1 - Unable to see NetApp volumes formatted as Windows ReFS filesystem


Environment is SnapCenter 1.1 server using SnapCenter Plug-ins Package 1.1 for Windows on a Windows 2012 R2 host running MS SQL Server 2014. The SQL instance has several databases. All of the databases reside on Windows volumes/disks that are NetApp LUNs.

 

Some of the Windows volumes are formatted NTFS, while some of the other volumes are formatted with the Windows Resilient File System (ReFS).

 

For the SQL databases that are on NTFS-formatted volumes, when you navigate to Inventory / <host> / Database / <SQL Server Instance>, select/highlight a database and click the 'Details' button, SnapCenter properly shows the associated NetApp LUNs (in the Storage Name section).

 

For the SQL databases that are on ReFS-formatted file systems, SnapCenter does not list any NetApp LUN names in the 'Storage Name' section.

 

So, I am curious if the SnapCenter Plug-ins Package 1.1 for Windows supports Windows volumes formatted with the ReFS file system format, when trying to do SQL database backups.

SnapCenter Plug-in for Oracle Database on Windows, Soon Come?

Can anyone let us know if Windows support will be added for the SnapCenter Plug-in for Oracle Database? I'm starting to evaluate the product and looking to migrate our SM jobs, but this feature would be useful as we have a new requirement to back up an Oracle server on Windows.

SMSP permissions issues - db_owner Permission to Stub Database


Hello,

We're working on an SMSP implementation and currently have some issues.

One main error is being seen on application servers / frontend servers.

According to the error, the Agent account doesn't have db_owner permissions.

The thing is that the agent user is the farm admin; we checked in SQL Management Studio and it seems to have all the permissions (in User Mapping, "dbo" appears on the reported database and on the SharePoint databases).

Also, the farm is not visible via the Backup tab. I guess it's related to the errors described in the Health Analyzer.

I will be happy if you can suggest ways to troubleshoot this.

 

This is the error message (for both the "Connector" and "Storage Manager" modules):

  • Rule: db_owner Permission to Stub Database.
  • Module: Connector + Storage Manager
  • Farm: Farm(SQLSERVER:DBNAME_sharepoint_config)
  • Hostname: …
  • Sub-Category: SQL Permissions
  • Status: Error
  • Result
  • Explanation: The rule checks whether or not the Agent account has the db_owner permission to the stub database. The Agent account needs this permission to create the stub database table, and also create, delete, and update stubs.
  • Solution:
  1. Log in to the SQL Server Instance where the stub database resides.
  2. Navigate to security > Logins.
  3. Locate the Agent account. If the Agent account does not exist, add it.
  4. Double-Click the Agent account. The Login Properties pop-up window appears.
  5. Click User Mapping in the left navigation.
  6. In the Users mapped to this login field, select the stub database by selecting the corresponding checkbox.
  7. In the Database role membership for field, select the db_owner checkbox.
  8. Click OK to save your changes.

 

Environment Versions:

  • SMSP 8.2
  • SharePoint 2013 SP1
  • Windows 2012 R2
  • SQL 2012

 

Thanks !

SnapCenter High Availability


Hi All,

 

 

It's been a while since I used Snap Creator, and it seems SC now stores its metadata in a MySQL DB.

Can you please help by providing any reference architectures for protecting SC itself from a site-level HA perspective?

 

Example - 

- How many VSC instances can SCF manage?

- How do we back up SC itself?

- How do we fail over SCF? SnapMirror the VM itself, or a MySQL DB backup / restore to the passive instance?
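For the "back up SC itself" part, the simplest thing I can think of is a plain dump of the repository database alongside protection of the VM (a sketch; I am assuming the repository database name here, so treat sc_repository as a placeholder for whatever your installation uses):

mysqldump -u root -p --single-transaction sc_repository > sc_repository_backup.sql

The dump could then be restored into the MySQL instance of a passive/standby server, which is one of the failover options listed above.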

 

Thanks

 

Modi

SnapVault replication jobs failing


Hello,

 

I've got SnapVault jobs failing every day. Each day we get SnapVault failure messages like "Job terminated".

 

When I check the SnapVault replication limitations for the FAS8060 model, NetApp states that a maximum of 128 jobs is allowed. I've deployed 105 SnapVault replication relationships on the FAS8060 filer. Although this is below the maximum limit, our SnapVault jobs keep failing.
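One thing worth checking when the "Job terminated" messages appear is how many transfers are actually running at the same moment, since the limits I have seen documented are per concurrent transfer rather than per configured relationship (a sketch, assuming clustered ONTAP; on 7-Mode the rough equivalent would be snapvault status):

snapmirror show -status Transferring

Counting the lines of that output at the time of the failures would show whether we really stay under the limit when the jobs kick off together.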

 

Can anyone shed some light on this issue?

 

 

Regards,

Phani
