Channel: High Availability (Clustering) forum

CSV IO redirection theory question


Hello!

Suppose there's an active/passive two-node Hyper-V cluster where all VMs are active on the first node.

Once there's a storage connectivity problem, all IO would be redirected through node1/node2's Cluster/CSV network adapters.

To avoid any performance degradation inside the VMs, the speed of the Cluster/CSV NICs should be at least equal to or greater than that of the iSCSI NICs (>=10 Gbps). To fulfill this requirement I would have to invest in at least one additional 10 Gb switch plus a pair of 10 Gb NICs.

Doesn't it make more sense in this situation to initiate a cluster failover to the node which does not have any issues with the storage?

 - in this case a speed of 1 Gb for the Cluster/CSV NICs may be sufficient and there would be no need for a second 10 Gb switch.

Q: Is it possible to configure AUTOMATIC failover due to storage connectivity problems (in spite of working heartbeats)?
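For the manual alternative, a hedged PowerShell sketch (untested here; assumes the FailoverClusters module on 2012 R2, and node1/node2 are placeholder names): check whether any CSV has fallen into redirected access, and if so drain the affected node so the VMs live-migrate to the healthy one:

```powershell
# Show CSVs that are no longer doing direct I/O on some node
Get-ClusterSharedVolumeState |
    Where-Object { $_.StateInfo -ne 'Direct' } |
    Format-Table Name, Node, StateInfo, FileSystemRedirectedIOReason

# Drain node1 (placeholder name): its roles, including VMs, move to node2,
# avoiding long-term redirected I/O over the cluster network
Suspend-ClusterNode -Name node1 -Drain -TargetNode node2
```

As far as I know there is no built-in policy that fails VMs over purely because CSV I/O went redirected while heartbeats still work, so a check like this would have to be scheduled or triggered from the event log.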

Thank you in advance,

Michael




SQL 2012 Failover Cluster - unable to start because 'Network Name' failed


Hi all,

Running a 2012 R2 Failover Cluster with SQL 2012. I'm unable to start the SQL 2012 cluster role because of the following error:

Log Name:      System
Source:        Microsoft-Windows-FailoverClustering
Event ID:      1069
Description:
Cluster resource 'SQL Network Name (SCSQLCL01)' of type 'Network Name' in clustered role 'SQL Server (VMM)' failed.

Failover Cluster Manager shows the corresponding resource failure (screenshot missing from the post).

Observations thus far:

  • Passes all cluster validation tests (no issues)
  • I sometimes see Kerberos errors in the log on both cluster members, but it's not consistent and I cannot pin down the cause:

The Kerberos client received a KRB_AP_ERR_MODIFIED error from the server scsqlcl01-2$. The target name used was MSServerClusterMgmtAPI/SCSQLCL01CORE.service.local.

  • The cluster computer object has been granted permissions on the cluster
  • All computer objects are created, and DNS entries are present
  • It sometimes "just works": it comes online without a hitch and I can communicate with the cluster name from the SQL instance with no problems
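On the Kerberos side, a hedged diagnostic sketch (names taken from the event above; setspn ships with Windows): KRB_AP_ERR_MODIFIED often points at duplicate or misplaced SPN registrations, which can be checked with:

```powershell
# List the SPNs registered on the cluster name account and on node 2's account;
# the same SPN appearing on two accounts commonly causes KRB_AP_ERR_MODIFIED
setspn -L SCSQLCL01
setspn -L scsqlcl01-2

# Search the forest for duplicate registrations of the reported SPN
setspn -Q MSServerClusterMgmtAPI/SCSQLCL01CORE.service.local
```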

Any help would be appreciated.

Thanks.
Event Id 1196: Cluster network name resource 'Cluster Name' failed registration of one or more associated DNS name(s) for the following reason: DNS request not supported by name server.


Hello,

Need urgent help with Failover Clustering on a Windows Server 2012 R2 host. The quorum disk is assigned, the shared disk is assigned, and the cluster was created successfully. However, we keep getting error 1196 on the cluster nodes stating they aren't able to register with the DNS server. The correct DNS servers are assigned, and ping works fine to the gateway, the DNS server, and other machines on the network. Yet we still get this error. This is a production cluster setup.
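A hedged sketch of something worth trying (assumes the FailoverClusters module and the default resource name "Cluster Name"): "DNS request not supported by name server" typically points at the zone or server not accepting dynamic updates, so after confirming the zone allows them you can retry the registration by hand:

```powershell
# Re-attempt DNS registration for the cluster's network name resource,
# then re-check the System event log for a new 1196
Get-ClusterResource -Name "Cluster Name" | Update-ClusterNetworkNameResource
```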

Also, the "Application Experience" service keeps stopping on its own, and the suggested fix is to reboot the machine.

Urgent help required. Kindly advise ASAP!

-Karan


private property name 'NodeWeight' had an invalid character


Hi, I get the following error event daily on all nodes, once per node. I would appreciate it if someone could provide a solution.

EventLog: Microsoft-Windows-FailoverClustering-WMIProvider/Admin 

General: Failover Cluster WMI Provider detected an invalid character. The private property name 'NodeWeight' had an invalid character but the provider failed to change it to a valid property name. Property names must start with A-Z or a-z, and valid characters for WMI property names are A-Z, a-z, 0-9, and '_'.

Details:

<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Microsoft-Windows-FailoverClustering-WMIProvider" Guid="{0461BE3C-BC15-4BAD-9A9E-51F3FADFEC75}" />
    <EventID>6237</EventID>
    <Version>0</Version>
    <Level>2</Level>
    <Task>1</Task>
    <Opcode>0</Opcode>
    <Keywords>0x8000000000000000</Keywords>
    <TimeCreated SystemTime="2015-11-10T12:30:40.692004900Z" />
    <EventRecordID>5018</EventRecordID>
    <Correlation />
    <Execution ProcessID="2600" ThreadID="5316" />
    <Channel>Microsoft-Windows-FailoverClustering-WMIProvider/Admin</Channel>
    <Computer>HV07.center.org</Computer>
    <Security UserID="S-1-5-21-1568039723-000000000-184960113-1282" />
  </System>
  <EventData>
    <Data Name="Parameter1">NodeWeight</Data>
  </EventData>
</Event>


-Siva

SQL Cluster Installation issue


Hi all,

I am trying to cluster a virtual SQL machine, and my situation is as follows:

  • Two Hyper-V servers connected to SAN storage through iSCSI.
  • Three LUNs presented from the SAN storage to the Hyper-V hosts.
  • I created a failover cluster for both Hyper-V hosts and added the LUNs as Cluster Shared Volumes (I didn't add any role to the cluster; I just created it to have shared disks).
  • Two SQL 2014 virtual machines were created, the LUNs were mapped to the virtual machines as iSCSI, and the option "Enable Virtual Hard Disk Sharing" was set in the virtual machine settings.
  • I installed failover clustering on both virtual SQL machines and all the cluster validation tests passed.
  • I added the disks to the SQL failover cluster and set them as Cluster Shared Volumes.

Now the issue is that when installing SQL I get the following error message: "The cluster on this computer does not have a shared disk available. To continue, at least one shared disk must be available."

I have already presented the LUNs as CSVs.

When I remove the disks from CSV the setup works, but resources will not move from one node to another.

Any ideas?

Live migration failed due to Hyper-V virtual SAN switch issue


Hi folks,

I have 3 servers connected to two SAN switches and created two virtual SAN switches. One virtual machine was created with a virtual HBA connected. When trying to move this virtual machine to another host, live migration failed with the following error:


Live migration of 'Virtual Machine host2' failed.

Virtual machine migration operation for 'vm' failed at migration destination 'host2'. (Virtual machine ID 16741329-3B1B-42CE-926D-4F6486FA3CE6)

'host2' Synthetic FibreChannel Port: Failed to start reserving resources with Error 'This operation returned because the timeout period expired.' (0x800705B4). (Virtual machine ID 16741329-3B1B-42CE-926D-4F6486FA3CE6)

'host2': NPIV virtual port operation on virtual port (C003FF71E94E0004) failed with an error: The pre-determined timeout period for the virtual port operation (60 seconds) expired. (Virtual machine ID 16741329-3B1B-42CE-926D-4F6486FA3CE6)

Even though the configuration is as follows:

  • NPIV is enabled on all ports.
  • The same ports on each SAN switch are used as the uplink of the corresponding virtual SAN switch (all ports of SAN switch 1 are in VSAN1, all ports of SAN switch 2 are in VSAN2).

storage live migration leaves files in the old location


I have a Hyper-V cluster and a new Scale-Out File Server cluster. I am in the process of storage-live-migrating my VMs to the SOFS cluster. Some VMs migrate over fine, while others leave references to the old storage location when the migration completes.

I have configured constrained delegation between the Hyper-V hosts, and SMB delegation on the SOFS nodes.

Any ideas why the storage live migration is not completely moving everything over to the new location?

thanks so much for your time


Active Directory Certificate Services service cannot start when running on Windows clustering


We have a 2-node failover cluster running a shared service called ClusterCAS1. It has two dependencies: a shared disk, which is up and running (this is where the database and logs live), and a generic service called Certification Authority, which is failing.

I checked the active node (a Windows 2008 server) and found that the Active Directory Certificate Services service was in a stopped state. I tried to restart it, but it failed. I also checked the ADCS role in Server Manager and got the error below.




Problems connecting to a share on a 2008 cluster by IP address

Hello,

I have built a new 2008 cluster and copied all the files via robocopy to the new cluster. Now I have the following problem.

Connect via \\IP-address\share --> does not work
Connect via \\servername\share --> does work

Does anybody have an idea what this could be?

Cluster: 2 nodes, active/active

Connect via \\ip-address\c$ --> does work

Greetz
QuentinT

How to get notified about mode changes in CSV


Hello folks,

I have a user-mode service that needs to be notified when a particular Cluster Shared Volume's mode changes (say from non-redirected mode to maintenance mode, or to redirected mode, etc.).

Is there an API which can help register for such notifications?
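For reference, an untested sketch against the Cluster API (clusapi.h; Windows 8 / Server 2012-era headers). I believe the v2 notification port can filter on shared-volume changes; treat the flag and enum names here as assumptions to verify against your SDK:

```c
// Untested sketch: subscribe to CSV state-change notifications via the
// Cluster API v2 notification port (clusapi.h, link against clusapi.lib).
#include <windows.h>
#include <clusapi.h>

void WatchCsvStateChanges(void)
{
    HCLUSTER hCluster = OpenCluster(NULL);          // local cluster

    // Filter: shared-volume objects, state changes only (verify the exact
    // flag name in your SDK's CLUSTER_CHANGE_SHARED_VOLUME_V2 enum)
    NOTIFY_FILTER_AND_TYPE filter;
    filter.dwObjectType = CLUSTER_OBJECT_TYPE_SHARED_VOLUME;
    filter.FilterFlags  = CLUSTER_CHANGE_SHARED_VOLUME_STATE_V2;

    HCHANGE hChange = CreateClusterNotifyPortV2(
        INVALID_HANDLE_VALUE, hCluster, &filter, 1, 0);

    for (;;) {
        DWORD_PTR key;
        NOTIFY_FILTER_AND_TYPE got;
        BYTE  buf[512];    DWORD cbBuf     = sizeof(buf);
        WCHAR id[256];     DWORD cchId     = 256;
        WCHAR parent[256]; DWORD cchParent = 256;
        WCHAR name[256];   DWORD cchName   = 256;
        WCHAR type[64];    DWORD cchType   = 64;

        // Blocks until a CSV changes state (direct/redirected/maintenance)
        DWORD rc = GetClusterNotifyV2(hChange, &key, &got,
                                      buf, &cbBuf, id, &cchId,
                                      parent, &cchParent, name, &cchName,
                                      type, &cchType, INFINITE);
        if (rc != ERROR_SUCCESS)
            break;
        // ... notify the user-mode service about volume `name` here ...
    }

    CloseClusterNotifyPort(hChange);
    CloseCluster(hCluster);
}
```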

Thanks,

Ayush



Proxy physical IP communicating with NLB IP


Dear All,

Recently we installed a new proxy server, GFI Standalone 2015, on Windows Server 2012 R2 at a customer site.

For high availability we configured two proxy servers with the NLB feature of Windows Server 2012 R2.

For internal application sites we also installed the IIS role; through IIS we bound port 8443 and created a .pac file.

What we observed in the proxy server's real-time traffic is that the proxy's physical IP is sending requests to the NLB IP ("http://NLB IP:8080/array.dll?Get.Routing.Script"), and because of this the upload volume is high.
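For what it's worth, a minimal .pac sketch (the address 192.0.2.10 and the host intranet.internal.example are placeholders; substitute your NLB virtual IP and bindings). Clients should be pointed at the VIP, not at a node's physical IP, so proxy traffic balances across both nodes:

```javascript
// Minimal proxy auto-config (.pac) sketch. "192.0.2.10" is a placeholder
// for the NLB virtual IP on the proxy port.
function FindProxyForURL(url, host) {
  // Placeholder internal host that should bypass the proxy entirely
  if (host === "intranet.internal.example") {
    return "DIRECT";
  }
  // Everything else goes through the NLB virtual IP
  return "PROXY 192.0.2.10:8080";
}
```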

Please share any findings on the above case that could help us.

Failed to assign iSCSI shared storage to Windows 2012 Standard edition cluster


I am new to Server 2012 and am trying to create a cluster for SQL 2008 Enterprise on Windows Server 2012. To achieve this I configured a DC and two cluster nodes; all servers run Server 2012 Standard edition, and my Windows VMs run on a KVM hypervisor. I created iSCSI virtual disks on the DC to share storage with the cluster, with separate virtual disks for SQL logs, data, etc., and mounted the shared storage with the same drive letters on both cluster nodes.

The cluster node on which I'm running Cluster Manager can see and mount all the shared storage successfully, but the mounted partitions on the second node disappear for some reason. I can see the partitions in Disk Management, but when I try to bring the disks online this error appears: "The specified disk or volume is managed by the Microsoft failover clustering component. The disk must be in cluster maintenance mode and the cluster resource status must be online to perform this operation." There are also three consecutive event IDs (10, 70, 1) from source iScsiPrt: "Login request failed. The login response packet is given in the dump data." "Error occurred when processing iSCSI logon request. The request was not retried." "Initiator failed to connect to the target. Target IP address and TCP port number are given in the dump data."

Before creating the cluster, both nodes were able to persistently mount the partitions. If I manually shut down node 1 (the cluster manager), node 2 becomes the active node and the partitions are visible and persistently mounted, but then the mounted partitions are no longer visible on node 1. Has anyone experienced the same issue or know of a solution? All comments are welcome and highly appreciated. Thanks

After destroying WFC, unable to successfully cleanup cluster in windows 2012R2


Hi,

After destroying the WFC, I am unable to create a failover cluster on Windows 2012 R2. I receive the error message "unable to successfully cleanup cluster in windows 2012R2".

Thanks

Nasir Karim

 

Failover Clustering storage


Hi,

I am using two servers with Windows 2008 R2 and one ordinary desktop PC with a normal HDD, also running Windows 2008 R2.

I want to add the desktop PC's HDD to the cluster storage, so the database is stored on the desktop HDD.

I have now configured the cluster on my servers. The cluster shows Online, but there is no disk in the Storage tab, and when I click "Add a Disk" the system shows this error:

No disks suitable for cluster disks were found. For diagnostic information about disks available to the cluster, use the Validate a Configuration Wizard to run Storage tests.

See the attached file.

Please suggest how I can resolve this problem.

Windows server 2012 R2 clustering between physical workstation and VM node


Dear all,

I have a physical server and a virtual machine, both installed with Windows Server 2012 R2. Is it possible to build a cluster environment between the physical server and the virtual machine? (Note: the virtual machine was built on VMware.)


Performance degradation after adding Storage Spaces pool into Failover Cluster


Good day!

We encountered performance degradation after adding a Storage Spaces pool into a failover cluster. Outside the failover cluster the Storage Spaces pool works just fine.

We have two servers running Windows Server 2012 R2 Standard with a JBOD connected by SAS. On one of them we created a storage pool made of 72 SAS drives (12 SAS SSDs of 800 GB and 60 SAS HDDs of 1.2 TB). The pool contains 4 virtual disks (Spaces) of the same configuration: 2-way mirror with tiering, 1 GB write-back cache, 4 columns, 64 KB interleave. The pool also contains a quorum virtual disk (witness disk) with the following configuration: 3-way mirror without tiering or write-back cache, 4 columns, 64 KB interleave.

We tested performance for both virtual disk tiers (SSDTier and HDDTier) with iometer. The results were great, just as expected: high IOPS and low latencies. For testing purposes we used file pinning with Set-FileStorageTier, then Optimize-Volume -TierOptimize.

Between the two servers we created a failover cluster. No problems were noticed during the cluster validation tests. Then we added all the virtual disks into the cluster and assigned the witness disk.

With Failover Cluster Manager we added four "File Server for general use" roles (not Scale-Out File Server). To each file server we assigned a separate virtual disk.

During the same iometer performance tests we saw noticeable performance degradation on all virtual disks. Result analysis revealed that the root cause of the regression is greatly increased HDDTier latency (2 to 5 times, beginning at queue depth = 1) for both read and write operations.

We decided to disassemble the cluster and completely clear its Storage Spaces pool configuration. Then we reassembled the pool with virtual disks of the same configuration. New iometer performance test results were fine. Then we recreated the failover cluster and added the disks into it; this time we didn't add the File Server roles. And again the performance tests showed increased latencies (the same 2 to 5 times).

We repeated our experiment several times and the results were the same: performance degraded right after the pool and virtual disks were added into the failover cluster. It became obvious that the cluster is the cause of the degradation.

We performed full hardware testing with the PowerShell ValidateStorageHardware.ps1 script (https://gallery.technet.microsoft.com/scriptcenter/Storage-Spaces-Physical-7ca9f304) and it didn't find any problems.

We changed the testing tool to diskspd. Its results were slightly better than iometer's, so we decided that iometer doesn't work correctly with clustered drives.
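For reference, a representative diskspd invocation of the kind used for such tiered-volume tests (illustrative flags and a placeholder target path, not the authors' exact command): 64 KB blocks for 60 seconds, 8 outstanding I/Os on 4 threads, random access, 30% writes, with latency capture:

```
diskspd -b64K -d60 -o8 -t4 -r -w30 -L E:\iotest.dat
```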

We then performed high-load cluster testing in the production environment. Just after the SSDTier filled up and the HDDTier started being used, we began to receive complaints from our clients. Perfmon detected high latencies (20 ms and more) although the workload was not very high.

In our production environment we have another Storage Spaces file server (not part of a cluster). It is based on a 30-drive pool (10 SATA SSDs plus 20 SAS HDDs) and works just fine: HDDTier latencies never rise above 6-8 ms even though the workload is much higher than on the new one. The workload profile is the same.

Our new pool and virtual disks were created according to best-practice advice and recommendations:

- drive count of no more than 80 (we have 72)

- drive capacity of no more than 10 TB (each of our virtual disks is about 9 TB: 1 TB SSDTier + 8 TB HDDTier)

The virtual disks were created with compatibility for FastRebuild (1 SSD and 2 HDDs were reserved). The write-back cache is 1 GB, and the disks' own caching option is disabled.

 

Can anybody help us with this cluster situation?

Things we have already tried:

- checked the influence of the write-back cache size – no effect

- checked the SAS HDD MPIO policy (RR by default; tried LB and FOO) – no effect

- checked the disks' own write-cache policy settings (now turned off on all our disks) – also no effect

The cluster and pool configurations were cleared with Clear-SdsConfig.ps1 (https://gallery.technet.microsoft.com/scriptcenter/Completely-Clearing-an-ab745947).


Driver developer question. How to save metadata inside the CSV volume before system restart?


I have a CSV-volume problem I am stuck on. I attach my minifilter below CSVFs, and during the InstanceTeardownStart callback I fail to save (write) a metadata file, getting STATUS_FILE_INVALID, so files are already invalidated at that point. I also can't use the pre-IRP_MJ_SHUTDOWN callback, which comes later, and the same goes for query teardown. So how can I save metadata inside the CSV volume during restart? I see no possibilities.

I know that probably only the MS CSV developers can help with this, and I don't know a better place to ask.

What's New in Failover Clustering in Windows Server Technical Preview

Rhs.exe is the cause of the bug check (blue screen error) on Windows 2012 R2 Std edition



We have set up a 3-node Hyper-V cluster, and every two weeks one of the nodes recovers from a bug check (blue screen error), causing VMs to fail over to other nodes or to hang. I suspect the problem is with CSV. Could someone please help with this?

Below is the cluster log:

00000f08.0000494c::2016/07/20-20:59:28.664 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node.  Attempting to find a good node...
00000f08.0000494c::2016/07/20-20:59:28.680 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node.  Attempting to find a good node...
00000f08.0000494c::2016/07/20-20:59:28.696 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node.  Attempting to find a good node...
00000f08.0000494c::2016/07/20-20:59:28.696 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node.  Attempting to find a good node...
00001a24.00004d70::2016/07/20-20:59:48.421 INFO  [RES] Physical Disk <Cluster Disk 3>: VolumeIsNtfs: Volume \\?\GLOBALROOT\Device\Harddisk2\ClusterPartition2\ has FS type NTFS
00000f08.00001ce8::2016/07/20-21:00:25.263 INFO  [RCM [RES] Virtual Machine BHRMPRDPTAP2 embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00004738::2016/07/20-21:00:25.263 INFO  [RCM [RES] Virtual Machine BHRMPRDSM embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00000ff8::2016/07/20-21:00:25.263 INFO  [RCM [RES] Virtual Machine BHRMPRDPTCI embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.000001f8::2016/07/20-21:00:25.263 INFO  [RCM [RES] Virtual Machine BHRMPRDBWDB embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.000028b4::2016/07/20-21:00:25.388 INFO  [RCM [RES] Virtual Machine SAPRouter embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.000028b4::2016/07/20-21:00:25.388 INFO  [RCM [RES] Virtual Machine BHRMPRDWD2 embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.000028b4::2016/07/20-21:00:25.388 INFO  [RCM [RES] Virtual Machine BHRMPRDERPDB embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2

00000f08.00001544::2016/07/21-01:33:14.358 INFO  [RCM [RES] Virtual Machine BHRMPRDBWDB embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2

00000f08.00001814::2016/07/21-01:33:14.358 INFO  [RCM [RES] Virtual Machine BHRMPRDSM embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00004c2c::2016/07/21-01:33:14.358 INFO  [RCM [RES] Virtual Machine BHRMPRDPTAP2 embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.000017ac::2016/07/21-01:33:14.358 INFO  [RCM [RES] Virtual Machine BHRMPRDERPDB embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.0000224c::2016/07/21-01:33:14.358 INFO  [RCM [RES] Virtual Machine BHRMPRDPTCI embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.000017ac::2016/07/21-01:33:14.483 INFO  [RCM [RES] Virtual Machine BHRMPRDWD2 embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00002890::2016/07/21-01:33:14.483 INFO  [RCM [RES] Virtual Machine SAPRouter embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
0000073c.00003cb0::2016/07/21-01:33:21.398 INFO  [CAM] Substituting Token Owner: BUILTIN\Administrators, Original: NT AUTHORITY\SYSTEM
0000073c.00003cb0::2016/07/21-01:33:21.398 INFO  [CAM] Token Created, Client Handle: 80006c7c
00001a24.00000e64::2016/07/21-01:33:50.909 INFO  [RES] Physical Disk <Cluster Disk 3>: VolumeIsNtfs: Volume \\?\GLOBALROOT\Device\Harddisk2\ClusterPartition2\ has FS type NTFS
00000f08.00001814::2016/07/21-01:34:14.538 INFO  [RCM [RES] Virtual Machine BHRMPRDPTAP2 embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.0000224c::2016/07/21-01:34:14.538 INFO  [RCM [RES] Virtual Machine BHRMPRDSM embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00001544::2016/07/21-01:34:14.538 INFO  [RCM [RES] Virtual Machine BHRMPRDBWDB embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.0000224c::2016/07/21-01:34:14.538 INFO  [RCM [RES] Virtual Machine BHRMPRDERPDB embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00003300::2016/07/21-01:34:14.538 INFO  [RCM [RES] Virtual Machine BHRMPRDPTCI embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00003300::2016/07/21-01:34:14.663 INFO  [RCM [RES] Virtual Machine BHRMPRDWD2 embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00001544::2016/07/21-01:34:14.663 INFO  [RCM [RES] Virtual Machine SAPRouter embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.0000224c::2016/07/21-01:34:31.770 INFO  [DCM] HandleSweeperRecheck
00000f08.0000224c::2016/07/21-01:34:31.770 INFO  [CLI] LsaCallAuthenticationPackage: 0, 0 size: 4, buffer: HDL( 842bef0000 )
00000f08.00000834::2016/07/21-01:34:31.770 INFO  [DCM] HandleRequest: dcm/connectivityCheck
00000f08.0000224c::2016/07/21-01:34:31.770 INFO  [DCM] SetVolumeMountPoint C:\ClusterStorage\Volume2\ => \\?\Volume{9e8c0ad8-b0dd-43e1-a1ea-686bdf4aa582}\
00000f08.0000224c::2016/07/21-01:34:31.770 INFO  [DCM] SetVolumeMountPoint C:\ClusterStorage\Volume1\ => \\?\Volume{baf4e645-bc8b-4a6c-8881-98cbeb2df054}\
0000073c.00003cb0::2016/07/21-01:34:48.618 INFO  [CAM] Substituting Token Owner: BUILTIN\Administrators, Original: NT AUTHORITY\SYSTEM
0000073c.00003cb0::2016/07/21-01:34:48.618 INFO  [CAM] Token Created, Client Handle: 800064d0
00001a24.00003950::2016/07/21-01:34:50.920 INFO  [RES] Physical Disk <Cluster Disk 3>: VolumeIsNtfs: Volume \\?\GLOBALROOT\Device\Harddisk2\ClusterPartition2\ has FS type NTFS
00000f08.00002890::2016/07/21-01:35:14.700 INFO  [RCM [RES] Virtual Machine BHRMPRDPTAP2 embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00003300::2016/07/21-01:35:14.700 INFO  [RCM [RES] Virtual Machine BHRMPRDBWDB embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00002890::2016/07/21-01:35:14.700 INFO  [RCM [RES] Virtual Machine BHRMPRDSM embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.000017ac::2016/07/21-01:35:14.700 INFO  [RCM [RES] Virtual Machine BHRMPRDPTCI embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.0000224c::2016/07/21-01:35:14.700 INFO  [RCM [RES] Virtual Machine BHRMPRDERPDB embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00003300::2016/07/21-01:35:14.825 INFO  [RCM [RES] Virtual Machine BHRMPRDWD2 embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.000017ac::2016/07/21-01:35:14.825 INFO  [RCM [RES] Virtual Machine SAPRouter embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
0000073c.00004a64::2016/07/21-01:35:24.829 INFO  [CAM] Substituting Token Owner: BUILTIN\Administrators, Original: NT AUTHORITY\SYSTEM
0000073c.00004a64::2016/07/21-01:35:24.829 INFO  [CAM] Token Created, Client Handle: 80003168
00001a24.00002c0c::2016/07/21-01:35:50.931 INFO  [RES] Physical Disk <Cluster Disk 3>: VolumeIsNtfs: Volume \\?\GLOBALROOT\Device\Harddisk2\ClusterPartition2\ has FS type NTFS
00000f08.000017ac::2016/07/21-01:36:14.869 INFO  [RCM [RES] Virtual Machine BHRMPRDPTAP2 embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.0000224c::2016/07/21-01:36:14.869 INFO  [RCM [RES] Virtual Machine BHRMPRDSM embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00002890::2016/07/21-01:36:14.869 INFO  [RCM [RES] Virtual Machine BHRMPRDBWDB embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00002d2c::2016/07/21-01:36:14.869 INFO  [RCM [RES] Virtual Machine BHRMPRDPTCI embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00002d2c::2016/07/21-01:36:14.869 INFO  [RCM [RES] Virtual Machine BHRMPRDERPDB embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00001544::2016/07/21-01:36:14.978 INFO  [RCM [RES] Virtual Machine SAPRouter embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00003300::2016/07/21-01:36:14.978 INFO  [RCM [RES] Virtual Machine BHRMPRDWD2 embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00001a24.00002c0c::2016/07/21-01:36:50.934 INFO  [RES] Physical Disk <Cluster Disk 3>: VolumeIsNtfs: Volume \\?\GLOBALROOT\Device\Harddisk2\ClusterPartition2\ has FS type NTFS
0000073c.00002970::2016/07/21-01:36:51.403 INFO  [CAM] Substituting Token Owner: BUILTIN\Administrators, Original: NT AUTHORITY\SYSTEM
0000073c.00002970::2016/07/21-01:36:51.403 INFO  [CAM] Token Created, Client Handle: 80003988
00000f08.00002890::2016/07/21-01:37:15.042 INFO  [RCM [RES] Virtual Machine BHRMPRDBWDB embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00002d2c::2016/07/21-01:37:15.042 INFO  [RCM [RES] Virtual Machine BHRMPRDPTAP2 embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00002d2c::2016/07/21-01:37:15.042 INFO  [RCM [RES] Virtual Machine BHRMPRDSM embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00002890::2016/07/21-01:37:15.042 INFO  [RCM [RES] Virtual Machine BHRMPRDERPDB embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00001544::2016/07/21-01:37:15.042 INFO  [RCM [RES] Virtual Machine BHRMPRDPTCI embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00001544::2016/07/21-01:37:15.167 INFO  [RCM [RES] Virtual Machine BHRMPRDWD2 embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.000017ac::2016/07/21-01:37:15.167 INFO  [RCM [RES] Virtual Machine SAPRouter embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00001544::2016/07/21-01:37:31.825 INFO  [DCM] HandleSweeperRecheck
00000f08.00001544::2016/07/21-01:37:31.825 INFO  [CLI] LsaCallAuthenticationPackage: 0, 0 size: 4, buffer: HDL( 842bef0000 )
00000f08.00000834::2016/07/21-01:37:31.825 INFO  [DCM] HandleRequest: dcm/connectivityCheck
00000f08.00001544::2016/07/21-01:37:31.825 INFO  [DCM] SetVolumeMountPoint C:\ClusterStorage\Volume2\ => \\?\Volume{9e8c0ad8-b0dd-43e1-a1ea-686bdf4aa582}\
00000f08.00001544::2016/07/21-01:37:31.825 INFO  [DCM] SetVolumeMountPoint C:\ClusterStorage\Volume1\ => \\?\Volume{baf4e645-bc8b-4a6c-8881-98cbeb2df054}\
00001a24.00002c0c::2016/07/21-01:37:50.945 INFO  [RES] Physical Disk <Cluster Disk 3>: VolumeIsNtfs: Volume \\?\GLOBALROOT\Device\Harddisk2\ClusterPartition2\ has FS type NTFS
00000f08.000017ac::2016/07/21-01:38:08.182 INFO  [RCM] [GIM] Scheduling Local Node Crawler to run in 300000 millisec.
00000f08.00001544::2016/07/21-01:38:15.207 INFO  [RCM [RES] Virtual Machine BHRMPRDPTAP2 embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00001544::2016/07/21-01:38:15.207 INFO  [RCM [RES] Virtual Machine BHRMPRDPTCI embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00002890::2016/07/21-01:38:15.207 INFO  [RCM [RES] Virtual Machine BHRMPRDSM embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00003a08::2016/07/21-01:38:15.207 INFO  [RCM [RES] Virtual Machine BHRMPRDBWDB embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00003a08::2016/07/21-01:38:15.207 INFO  [RCM [RES] Virtual Machine BHRMPRDERPDB embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00003a08::2016/07/21-01:38:15.332 INFO  [RCM [RES] Virtual Machine BHRMPRDWD2 embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000f08.00001814::2016/07/21-01:38:15.332 INFO  [RCM [RES] Virtual Machine SAPRouter embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00001a24.00002e54::2016/07/21-01:38:50.948 INFO  [RES] Physical Disk <Cluster Disk 3>: VolumeIsNtfs: Volume \\?\GLOBALROOT\Device\Harddisk2\ClusterPartition2\ has FS type NTFS
00001a24.00003698::2016/07/21-01:39:50.958 INFO  [RES] Physical Disk <Cluster Disk 3>: VolumeIsNtfs: Volume \\?\GLOBALROOT\Device\Harddisk2\ClusterPartition2\ has FS type NTFS
00000f08.00003a08::2016/07/21-01:40:31.872 INFO  [DCM] HandleSweeperRecheck
00000f08.00003a08::2016/07/21-01:40:31.872 INFO  [CLI] LsaCallAuthenticationPackage: 0, 0 size: 4, buffer: HDL( 842bf70000 )
00000f08.00000834::2016/07/21-01:40:31.872 INFO  [DCM] HandleRequest: dcm/connectivityCheck
00000f08.00003a08::2016/07/21-01:40:31.872 INFO  [DCM] SetVolumeMountPoint C:\ClusterStorage\Volume2\ => \\?\Volume{9e8c0ad8-b0dd-43e1-a1ea-686bdf4aa582}\
00000f08.00003a08::2016/07/21-01:40:31.872 INFO  [DCM] SetVolumeMountPoint C:\ClusterStorage\Volume1\ => \\?\Volume{baf4e645-bc8b-4a6c-8881-98cbeb2df054}\
00001a24.000049c4::2016/07/21-01:40:50.965 INFO  [RES] Physical Disk <Cluster Disk 3>: VolumeIsNtfs: Volume \\?\GLOBALROOT\Device\Harddisk2\ClusterPartition2\ has FS type NTFS
00001a24.00001984::2016/07/21-01:41:50.979 INFO  [RES] Physical Disk <Cluster Disk 3>: VolumeIsNtfs: Volume \\?\GLOBALROOT\Device\Harddisk2\ClusterPartition2\ has FS type NTFS
00001a24.000010f0::2016/07/21-01:42:50.984 INFO  [RES] Physical Disk <Cluster Disk 3>: VolumeIsNtfs: Volume \\?\GLOBALROOT\Device\Harddisk2\ClusterPartition2\ has FS type NTFS
00000f08.00001804::2016/07/21-01:42:58.908 INFO  [GEM] Node 1: Deleting [2:46817 , 2:46835] (both included) as it has been ack'd by every node
0000073c.00002abc::2016/07/21-01:43:11.685 INFO  [CAM] Token Created, Client Handle: 80004fc8
00000f08.00004b08::2016/07/21-01:43:12.733 INFO  [GUM] Node 1: Executing locally gumId: 20020, updates: 1, first action: /dm/update
0000073c.00003e78::2016/07/21-01:43:13.029 INFO  [CAM] Token Created, Client Handle: 80003120
00000f08.0000485c::2016/07/21-01:43:13.725 INFO  [GEM] Node 1: Deleting [3:3415 , 3:3415] (both included) as it has been ack'd by every node
00000f08.00001804::2016/07/21-01:43:14.069 INFO  [GUM] Node 1: Processing RequestLock 2:1551
00000f08.0000485c::2016/07/21-01:43:14.069 INFO  [GUM] Node 1: Processing GrantLock to 2 (sent by 3 gumid: 20020)
00000f08.00001814::2016/07/21-01:43:14.069 INFO  [GUM] Node 1: Executing locally gumId: 20021, updates: 1, first action: /dm/update
00000f08.00001804::2016/07/21-01:43:15.069 INFO  [GEM] Node 1: Deleting [2:46836 , 2:46837] (both included) as it has been ack'd by every node
00000f08.0000485c::2016/07/21-01:43:15.069 INFO  [GEM] Node 1: Deleting [3:3416 , 3:3416] (both included) as it has been ack'd by every node
00000f08.00002d2c::2016/07/21-01:43:31.927 INFO  [DCM] HandleSweeperRecheck
00000f08.00002d2c::2016/07/21-01:43:31.927 INFO  [CLI] LsaCallAuthenticationPackage: 0, 0 size: 4, buffer: HDL( 842c430000 )
00000f08.00000834::2016/07/21-01:43:31.927 INFO  [DCM] HandleRequest: dcm/connectivityCheck
00000f08.00002d2c::2016/07/21-01:43:31.927 INFO  [DCM] SetVolumeMountPoint C:\ClusterStorage\Volume2\ => \\?\Volume{9e8c0ad8-b0dd-43e1-a1ea-686bdf4aa582}\
00000f08.00002d2c::2016/07/21-01:43:31.927 INFO  [DCM] SetVolumeMountPoint C:\ClusterStorage\Volume1\ => \\?\Volume{baf4e645-bc8b-4a6c-8881-98cbeb2df054}\
00001a24.000010f0::2016/07/21-01:43:50.993 INFO  [RES] Physical Disk <Cluster Disk 3>: VolumeIsNtfs: Volume \\?\GLOBALROOT\Device\Harddisk2\ClusterPartition2\ has FS type NTFS
00001a24.00003f4c::2016/07/21-01:44:51.002 INFO  [RES] Physical Disk <Cluster Disk 3>: VolumeIsNtfs: Volume \\?\GLOBALROOT\Device\Harddisk2\ClusterPartition2\ has FS type NTFS
00001a24.00002948::2016/07/21-01:45:51.005 INFO  [RES] Physical Disk <Cluster Disk 3>: VolumeIsNtfs: Volume \\?\GLOBALROOT\Device\Harddisk2\ClusterPartition2\ has FS type NTFS
00001b24.00001b48::2016/07/21-01:46:15.342 ERR   [RHS] RhsCall::DeadlockMonitor: Call ISALIVE timed out by 15 milliseconds for resource 'Virtual Machine BHRMPRDBWDB'.
00001cec.00001d10::2016/07/21-01:46:15.342 ERR   [RHS] RhsCall::DeadlockMonitor: Call ISALIVE timed out by 15 milliseconds for resource 'Virtual Machine BHRMPRDPTAP2'.
00001d18.00001d3c::2016/07/21-01:46:15.342 ERR   [RHS] RhsCall::DeadlockMonitor: Call ISALIVE timed out by 15 milliseconds for resource 'Virtual Machine BHRMPRDPTCI'.
0000182c.00001ab0::2016/07/21-01:46:15.342 ERR   [RHS] RhsCall::DeadlockMonitor: Call ISALIVE timed out by 15 milliseconds for resource 'Virtual Machine BHRMPRDERPDB'.
00001d70.00001d94::2016/07/21-01:46:15.342 ERR   [RHS] RhsCall::DeadlockMonitor: Call ISALIVE timed out by 15 milliseconds for resource 'Virtual Machine BHRMPRDSM'.
0000182c.00001ab0::2016/07/21-01:46:15.342 INFO  [RHS] Enabling RHS termination watchdog with timeout 1680000 and recovery action 3 from source 5.
0000182c.00001ab0::2016/07/21-01:46:15.342 ERR   [RHS] Resource Virtual Machine BHRMPRDERPDB handling deadlock. Cleaning current operation and terminating RHS process.
00001d18.00001d3c::2016/07/21-01:46:15.342 INFO  [RHS] Enabling RHS termination watchdog with timeout 1680000 and recovery action 3 from source 5.
00001d18.00001d3c::2016/07/21-01:46:15.342 ERR   [RHS] Resource Virtual Machine BHRMPRDPTCI handling deadlock. Cleaning current operation and terminating RHS process.
00001d70.00001d94::2016/07/21-01:46:15.342 INFO  [RHS] Enabling RHS termination watchdog with timeout 1680000 and recovery action 3 from source 5.
00001b24.00001b48::2016/07/21-01:46:15.342 INFO  [RHS] Enabling RHS termination watchdog with timeout 1680000 and recovery action 3 from source 5.
00001d70.00001d94::2016/07/21-01:46:15.342 ERR   [RHS] Resource Virtual Machine BHRMPRDSM handling deadlock. Cleaning current operation and terminating RHS process.
00001cec.00001d10::2016/07/21-01:46:15.342 INFO  [RHS] Enabling RHS termination watchdog with timeout 1680000 and recovery action 3 from source 5.
00001b24.00001b48::2016/07/21-01:46:15.342 ERR   [RHS] Resource Virtual Machine BHRMPRDBWDB handling deadlock. Cleaning current operation and terminating RHS process.
00001cec.00001d10::2016/07/21-01:46:15.342 ERR   [RHS] Resource Virtual Machine BHRMPRDPTAP2 handling deadlock. Cleaning current operation and terminating RHS process.
00000f08.00003a08::2016/07/21-01:46:15.342 WARN  [RCM] HandleMonitorReply: FAILURENOTIFICATION for 'Virtual Machine BHRMPRDERPDB', gen(0) result 4/0.
00000f08.0000224c::2016/07/21-01:46:15.342 WARN  [RCM] HandleMonitorReply: FAILURENOTIFICATION for 'Virtual Machine BHRMPRDPTCI', gen(0) result 4/0.
00001d18.00001d3c::2016/07/21-01:46:15.342 ERR   [RHS] About to send WER report.
00000f08.00003a08::2016/07/21-01:46:15.342 INFO  [RCM] rcm::RcmResource::HandleMonitorReply: Resource 'Virtual Machine BHRMPRDERPDB' consecutive failure count 1.
0000182c.00001ab0::2016/07/21-01:46:15.342 ERR   [RHS] About to send WER report.
00000f08.0000224c::2016/07/21-01:46:15.342 INFO  [RCM] rcm::RcmResource::HandleMonitorReply: Resource 'Virtual Machine BHRMPRDPTCI' consecutive failure count 1.
00001d70.00001d94::2016/07/21-01:46:15.342 ERR   [RHS] About to send WER report.
00000f08.00002d2c::2016/07/21-01:46:15.342 WARN  [RCM] HandleMonitorReply: FAILURENOTIFICATION for 'Virtual Machine BHRMPRDSM', gen(0) result 4/0.
00000f08.00003a08::2016/07/21-01:46:15.342 WARN  [RCM] HandleMonitorReply: FAILURENOTIFICATION for 'Virtual Machine BHRMPRDBWDB', gen(0) result 4/0.
00000f08.00002d2c::2016/07/21-01:46:15.342 INFO  [RCM] rcm::RcmResource::HandleMonitorReply: Resource 'Virtual Machine BHRMPRDSM' consecutive failure count 1.
00000f08.00003a08::2016/07/21-01:46:15.342 INFO  [RCM] rcm::RcmResource::HandleMonitorReply: Resource 'Virtual Machine BHRMPRDBWDB' consecutive failure count 1.
00001b24.00001b48::2016/07/21-01:46:15.342 ERR   [RHS] About to send WER report.
00001cec.00001d10::2016/07/21-01:46:15.342 ERR   [RHS] About to send WER report.
00000f08.000017ac::2016/07/21-01:46:15.342 WARN  [RCM] HandleMonitorReply: FAILURENOTIFICATION for 'Virtual Machine BHRMPRDPTAP2', gen(0) result 4/0.
00000f08.000017ac::2016/07/21-01:46:15.342 INFO  [RCM] rcm::RcmResource::HandleMonitorReply: Resource 'Virtual Machine BHRMPRDPTAP2' consecutive failure count 1.
00001900.00001924::2016/07/21-01:46:15.452 ERR   [RHS] RhsCall::DeadlockMonitor: Call ISALIVE timed out by 16 milliseconds for resource 'Virtual Machine SAPRouter'.
00001900.00001924::2016/07/21-01:46:15.452 INFO  [RHS] Enabling RHS termination watchdog with timeout 1680000 and recovery action 3 from source 5.
00001900.00001924::2016/07/21-01:46:15.452 ERR   [RHS] Resource Virtual Machine SAPRouter handling deadlock. Cleaning current operation and terminating RHS process.
00000f08.000017ac::2016/07/21-01:46:15.452 WARN  [RCM] HandleMonitorReply: FAILURENOTIFICATION for 'Virtual Machine SAPRouter', gen(0) result 4/0.
00001900.00001924::2016/07/21-01:46:15.452 ERR   [RHS] About to send WER report.
00000f08.000017ac::2016/07/21-01:46:15.452 INFO  [RCM] rcm::RcmResource::HandleMonitorReply: Resource 'Virtual Machine SAPRouter' consecutive failure count 1.
00001dcc.00001df0::2016/07/21-01:46:15.467 ERR   [RHS] RhsCall::DeadlockMonitor: Call ISALIVE timed out by 15 milliseconds for resource 'Virtual Machine BHRMPRDWD2'.
00001dcc.00001df0::2016/07/21-01:46:15.467 INFO  [RHS] Enabling RHS termination watchdog with timeout 1680000 and recovery action 3 from source 5.
00001dcc.00001df0::2016/07/21-01:46:15.467 ERR   [RHS] Resource Virtual Machine BHRMPRDWD2 handling deadlock. Cleaning current operation and terminating RHS process.
00000f08.000017ac::2016/07/21-01:46:15.467 WARN  [RCM] HandleMonitorReply: FAILURENOTIFICATION for 'Virtual Machine BHRMPRDWD2', gen(0) result 4/0.
00000f08.000017ac::2016/07/21-01:46:15.467 INFO  [RCM] rcm::RcmResource::HandleMonitorReply: Resource 'Virtual Machine BHRMPRDWD2' consecutive failure count 1.
00001dcc.00001df0::2016/07/21-01:46:15.467 ERR   [RHS] About to send WER report.
00000f08.000017ac::2016/07/21-01:46:31.970 INFO  [DCM] HandleSweeperRecheck
00000f08.000017ac::2016/07/21-01:46:31.970 INFO  [CLI] LsaCallAuthenticationPackage: 0, 0 size: 4, buffer: HDL( 842c430000 )
00000f08.00000834::2016/07/21-01:46:31.970 INFO  [DCM] HandleRequest: dcm/connectivityCheck
00000f08.000017ac::2016/07/21-01:46:31.970 INFO  [DCM] SetVolumeMountPoint C:\ClusterStorage\Volume2\ => \\?\Volume{9e8c0ad8-b0dd-43e1-a1ea-686bdf4aa582}\
00000f08.000017ac::2016/07/21-01:46:31.970 INFO  [DCM] SetVolumeMountPoint C:\ClusterStorage\Volume1\ => \\?\Volume{baf4e645-bc8b-4a6c-8881-98cbeb2df054}\
00001a24.00004a18::2016/07/21-01:46:51.018 INFO  [RES] Physical Disk <Cluster Disk 3>: VolumeIsNtfs: Volume \\?\GLOBALROOT\Device\Harddisk2\ClusterPartition2\ has FS type NTFS
00001a24.00002dec::2016/07/21-01:47:51.030 INFO  [RES] Physical Disk <Cluster Disk 3>: VolumeIsNtfs: Volume \\?\GLOBALROOT\Device\Harddisk2\ClusterPartition2\ has FS type NTFS
00000f08.00002914::2016/07/21-01:48:12.877 ERR   [STM] mscs::stm::STM::ProcessMessageWorker@610 had !ERROR! 5910
000018f8.0000192c::2016/07/21-01:48:12.893 ERR   [RHS] RhsCall::DeadlockMonitor: Call RESOURCETYPECONTROL timed out by 16 milliseconds for resource '<none>'.
000018f8.0000192c::2016/07/21-01:48:12.893 INFO  [RHS] Enabling RHS termination watchdog with timeout 1200000 and recovery action 3 from source 6.
000018f8.0000192c::2016/07/21-01:48:12.893 ERR   [RHS] Resource Type Storage Pool handling deadlock. Cleaning current operation and terminating RHS process.
000018f8.0000192c::2016/07/21-01:48:12.893 ERR   [RHS] About to send WER report.
00000f08.00004524::2016/07/21-01:48:14.209 ERR   [STM] mscs::stm::STM::ProcessMessageWorker@610 had !ERROR! 5910
00001a24.00004290::2016/07/21-01:48:51.031 INFO  [RES] Physical Disk <Cluster Disk 3>: VolumeIsNtfs: Volume \\?\GLOBALROOT\Device\Harddisk2\ClusterPartition2\ has FS type NTFS
00000f08.00001814::2016/07/21-01:49:32.017 INFO  [DCM] HandleSweeperRecheck
00000f08.00001814::2016/07/21-01:49:32.017 INFO  [CLI] LsaCallAuthenticationPackage: 0, 0 size: 4, buffer: HDL( 842c430000 )
00000f08.00000834::2016/07/21-01:49:32.017 INFO  [DCM] HandleRequest: dcm/connectivityCheck
00000f08.00001814::2016/07/21-01:49:32.017 INFO  [DCM] SetVolumeMountPoint C:\ClusterStorage\Volume2\ => \\?\Volume{9e8c0ad8-b0dd-43e1-a1ea-686bdf4aa582}\
00000f08.00001814::2016/07/21-01:49:32.017 INFO  [DCM] SetVolumeMountPoint C:\ClusterStorage\Volume1\ => \\?\Volume{baf4e645-bc8b-4a6c-8881-98cbeb2df054}\
00000eec.00000ef0::2016/07/21-02:55:40.273 INFO  -----------------------------+ LOG BEGIN +-----------------------------
00000eec.00000ef0::2016/07/21-02:55:40.273 INFO  [CS] Starting clussvc as a service
00000eec.00000ef0::2016/07/21-02:55:40.273 INFO  [CS] cluster service logging level is 5
00000eec.00000f34::2016/07/21-02:55:40.273 INFO  [CS] Creating cluster node <vector len='1'>
00000eec.00000f34::2016/07/21-02:55:40.273 INFO      <item>ClusSvc</item>
00000eec.00000f34::2016/07/21-02:55:40.273 INFO  </vector>
00000eec.00000f44::2016/07/21-02:55:40.321 INFO  [StartupConfig]: Initializing.

---------------------------------------------------------------------------------------------------------------------------------------------------------

 

What happens if cluster core resource goes offline?


Hi,

I am just curious: what happens if the cluster core resources go offline or into a failed state in a production environment? Does the entire cluster go offline? Does it affect other cluster roles (SQL, Exchange, etc.)? Do virtual machines go down, or fail over to another node? I searched the internet but could not find an answer.

-Umesh.S.K
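
[Not an authoritative answer, but a starting point for observing this yourself: the FailoverClusters PowerShell module can show the state of the core cluster group (the Cluster Name, Cluster IP Address, and witness resources), and lets you take a core resource offline in a TEST cluster to see the effect. A minimal sketch; the resource names below are common defaults and may differ in your cluster:]

```powershell
# List the core cluster group and the resources it contains
Get-ClusterGroup -Name "Cluster Group"
Get-ClusterGroup -Name "Cluster Group" | Get-ClusterResource

# In a TEST cluster only: take the cluster Network Name offline to
# simulate a core-resource failure without stopping any node.
Stop-ClusterResource -Name "Cluster Name"

# Clustered roles (VMs, SQL, etc.) generally keep running, but anything
# that depends on the cluster name/IP -- such as remote management via
# the cluster name -- is affected until the resource is brought back.
Start-ClusterResource -Name "Cluster Name"
```

[Whether the whole cluster goes down depends on which core resource fails: losing the witness, for example, only matters for quorum math, whereas losing quorum itself does stop the cluster service on the nodes.]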
