Channel: BladeSystem - General topics

BL460c G8 Fail error


Hi everyone.

Today one of my BL460c Gen8 blades failed.

In the log I have the error described below:

185   Power 01/01/1970 03:01 02/04/2019 05:33 7 System Power Fault Detected (XR: 10 20 MID: FF CD FC D6 02 10 10 AA 00 00 08 EE 02 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00)

Please explain what happened and how I can restore the blade.

Thanks.


c7000 EOSL and Roadmap


Hi experts,

Please help me with the c7000 blade chassis roadmap and EOSL dates.

We have some existing c7000 blade chassis in our DC and want to expand with more enclosures of the same chassis type.

If there is no EOSL yet and a roadmap is still available for the c7000, then we can go ahead with it.

blade systems

Hi, we have an HP BladeSystem c3000 with an alarm that says "replace OA battery", but I can't find the location of the battery on the OA. Does anyone know where it is? Thanks.

C3000 enclosure with 4 * BL460c GEN8 and 3 * BL460C GEN10

We have a problem with a c3000 enclosure holding 4x BL460c Gen8 and 3x BL460c Gen10 blades.
With the Gen8 servers on, I can power on two Gen10 servers without problems, but as soon as the third Gen10 server powers on,
two of the three Gen10 servers power off with a "Device Operational Error" power-off.
In the IML log we get the following message:
Rack Infrastructure 07/16/2019 13:26:43 07/16/2019 13:26:43
1 Server Blade Enclosure Power Request Denied: Enclosure Busy
(Enclosure Serial Number Cxxxxxxxx, Bay 2)

The OA firmware is 4.90 with power management 1.04 (Feb 2019).
The question is: is this a supported configuration, or do we have another problem?
The customer runs multiple c3000 enclosures with 6x Gen8 and 2x Gen10 servers without problems.

 

 
 

 

Install an OS automatically on all bays


Hi,
How can I automatically install an operating system on all the server bays of an HP BladeSystem enclosure?
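One common approach is to script each blade's iLO over the Redfish API: mount an unattended-install ISO as virtual media, set a one-time boot from CD, and power-cycle; the ISO's answer file does the rest. A minimal sketch, assuming iLO 5-style Redfish endpoints; the host list and ISO URL are placeholders, and authentication/TLS handling is omitted:

```python
import json

# Hypothetical inventory: iLO addresses of the bays to install (placeholders).
ILO_HOSTS = ["10.0.0.11", "10.0.0.12"]
ISO_URL = "http://deploy.example.com/unattended-install.iso"  # placeholder

def insert_media_payload(iso_url):
    """Redfish body to mount an ISO as a virtual CD."""
    return {"Image": iso_url}

def boot_once_payload():
    """Redfish body requesting a single boot from the virtual CD."""
    return {"Boot": {"BootSourceOverrideTarget": "Cd",
                     "BootSourceOverrideEnabled": "Once"}}

def deploy(session, host, iso_url):
    """Mount the ISO, arm one-shot CD boot, and restart one blade.
    `session` is an authenticated HTTP session (e.g. requests.Session)."""
    base = f"https://{host}/redfish/v1"
    session.post(f"{base}/Managers/1/VirtualMedia/2/Actions/VirtualMedia.InsertMedia",
                 data=json.dumps(insert_media_payload(iso_url)))
    session.patch(f"{base}/Systems/1", data=json.dumps(boot_once_payload()))
    session.post(f"{base}/Systems/1/Actions/ComputerSystem.Reset",
                 data=json.dumps({"ResetType": "ForceRestart"}))
```

Looping `deploy()` over `ILO_HOSTS` covers every bay; verify the exact Redfish resource paths against your iLO generation before use.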

Updating c7000 OA firmware from the CLI with a mounted custom SPP

I am trying to update the OA firmware from a script that logs into the OA, mounts a custom SPP ISO from a web server, then performs a firmware update of all blades, servers, the OA, and the c7000 interconnects (gigabit and IB switch). When I issue 'update firmware server all', the blades boot from the mounted ISO and perform an automatic offline firmware update (as I want), but nothing is updated on the BladeSystem itself: not the OA, not the interconnects.

The current OA firmware version is 4.80; the custom 2019.03.1 ISO contains the OA firmware 4.90 RPM. I know I can update the firmware from the OA browser interface, but I am looking to create a script that performs an offline update of the c7000, BL460c Gen10 blades, and DL360/DL380 Gen10 servers all at once. I already have an online solution using 'smartupdate'; now I need an offline solution (boot from the SPP ISO, perform the automatic firmware update, reboot). Has anybody had success updating the c7000 OA firmware from the command-line interface?
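The blade-only result is expected in the sense that 'update firmware server all' boots the servers into the SPP but does not touch the OA; as far as I recall, the OA CLI flashes itself with a separate 'update image <url>' command pointing at the OA firmware binary. A sketch of the command sequence such a script might send over SSH (command names are from memory of the OA CLI, so verify them against the OA CLI user guide for your release):

```python
def oa_update_commands(oa_fw_url):
    """Build the OA CLI command sequence to flash the Onboard Administrator
    from a web-hosted firmware image. The URL is a placeholder; the command
    names are assumptions to check against the OA CLI user guide."""
    return [
        "show oa info",               # record the current OA firmware version
        f"update image {oa_fw_url}",  # flash the OA itself from the URL
        "show oa info",               # confirm the version after the OA restarts
    ]
```

The interconnects (VC or switch modules) generally have their own update paths as well, so a full offline script would need a step per module type, not just the server pass.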

HP C7000 internal network with all VLANs allowed


Hi,

I am trying to create an internal network between two blades with all VLANs allowed between them, so that the software can control the VLAN assignment. We are using Flex-10/10D Virtual Connect modules.

I tried creating an "Ethernet network", but that did not work: untagged connectivity works, but tagged VLANs do not.

Then I tried a shared uplink set without any ports, but I could not find a way to specify "any VLAN" or VLAN tunneling when no external ports are added.

I have looked through the VC cookbook and the VC user guide, to no avail.

Thanks in advance

Pratik

HP C7000 Onboard Administrator absent


Hi!

I have a problem after upgrading the OA firmware: there were two OAs (active + standby), but after flashing only one came back online. The other one cannot be reached even with a null-modem cable, and the screen stays black when a monitor is connected directly to this OA. Is there any way to do a hard reset, or any other way to find out what is wrong with this module?


HP C7000 AC redundant mode question


Hi

We currently have several c7000 enclosures configured as below for power redundancy:

All are set to AC redundant, with dynamic power enabled.

PSUs 1,3,5 are on AC feed A.

PSUs 2,4,6 are on AC feed B

The documentation I've reviewed states this should be 1, 2, 3 on feed A and 4, 5, 6 on feed B.

Is our current setup fully redundant, or do we need to change from 1,3,5 -> A / 2,4,6 -> B to 1,2,3 -> A / 4,5,6 -> B?
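Whichever grouping HPE mandates, the property AC-redundant (N+N) mode depends on is that either feed alone can carry the enclosure, which requires an even three-per-feed split. A small sanity check of that property, with both cablings as input; note this checks balance only and does not settle whether HPE additionally requires the contiguous 1-3 / 4-6 grouping:

```python
def feeds_balanced(psu_to_feed):
    """True if the power supplies are split evenly across exactly two AC
    feeds, so N+N (AC redundant) mode can survive the loss of either feed.
    Balance check only -- it does not verify HPE's documented grouping."""
    counts = {}
    for feed in psu_to_feed.values():
        counts[feed] = counts.get(feed, 0) + 1
    return len(counts) == 2 and len(set(counts.values())) == 1

# Current cabling (odd/even split) vs the documented contiguous split.
current = {1: "A", 3: "A", 5: "A", 2: "B", 4: "B", 6: "B"}
documented = {1: "A", 2: "A", 3: "A", 4: "B", 5: "B", 6: "B"}
```

Both layouts pass the balance check; the remaining question for HPE support is whether the firmware's feed-loss accounting assumes the contiguous bays.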

Thank you

DL360p Gen8 Ethernet through the iLO 4 dedicated port


I have several DL360p servers that came with two-port 10Gb LAN cards. They are awesome, but total overkill for my small network, which runs entirely on Cat 6 cabling and a Cisco Catalyst 3750 switch. I am fine with my network speeds and do not want to spend time or money finding the slower four-port 1Gb HPE adapter. I've read there is a way to use a single RJ45 port to carry both Ethernet traffic and the lights-out management subsystem.

So my question: is there a way to run the network/internet traffic through the dedicated iLO port? I searched quite a lot and did not find a clear answer. I came across https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-a00045378en_us&docLocale=en_US. As I understand it, the server can use one port for both functions, but that port cannot be the dedicated iLO port?

Any suggestions?

 

Can't get 10Gig networking working with BL860c i2 blade in a c3000 chassis


I have recently purchased two BL860c i2 blades and a BL860c i1 blade in a c3000 chassis. These are Itanium systems running HP-UX 11i v3. The two BL860c i2 blades have an embedded 10Gb network card that shows 16 NICs. The c3000 has a network pass-thru module that provides 4 ports to each blade. I have a 1Gb network module in Mezz 2 that goes to a second pass-thru module.

I am setting this system up for the first time and am running through the hardware, making sure everything works. The SAN works and the 1Gb network works, but when I configure the 10Gb networking, it does not work.

The first 10Gb port is lan0. It shows UP when I plug it into a switch and down when I remove the connection, so I have confirmed I am on the right port. When I have the blade1 and blade2 10Gb connections plugged into a flat Nexus switch, they cannot ping each other. I then plugged both into a Cisco 6509 in case they didn't like the Nexus switch; it still didn't work. I have two Solaris systems with 10Gb NICs that I used as a control group, plugged into the same switches: they can ping each other, but not the BL860c i2 systems.

I then took a short fiber cable and connected the two BL860c i2 servers directly to each other without a switch in between. That worked: they could ping each other. I then connected a Solaris server directly to one of the BL860c i2 servers: no ping.

So I have two BL860c servers that can only ping when connected directly to each other, and fail when connected to any switch or directly to other servers. I am hoping there is some 10Gb setting I do not know about that will get this working; I just used the defaults and SMH to configure the ports.

```
# ioscan -funC lan
Class     I  H/W Path                Driver  S/W State  H/W Type   Description
========================================================================
lan       0  0/0/0/3/0/0/0           iexgbe  CLAIMED    INTERFACE  HP PCIe 2-p 10GbE Built-in FLEX-10
lan       1  0/0/0/3/0/0/1           iexgbe  CLAIMED    INTERFACE  HP PCIe 2-p 10GbE Built-in FLEX-10
lan       2  0/0/0/3/0/0/2           iexgbe  CLAIMED    INTERFACE  HP PCIe 2-p 10GbE Built-in FLEX-10
lan       3  0/0/0/3/0/0/3           iexgbe  CLAIMED    INTERFACE  HP PCIe 2-p 10GbE Built-in FLEX-10
lan       4  0/0/0/3/0/0/4           iexgbe  CLAIMED    INTERFACE  HP PCIe 2-p 10GbE Built-in FLEX-10
lan       5  0/0/0/3/0/0/5           iexgbe  CLAIMED    INTERFACE  HP PCIe 2-p 10GbE Built-in FLEX-10
lan       6  0/0/0/3/0/0/6           iexgbe  CLAIMED    INTERFACE  HP PCIe 2-p 10GbE Built-in FLEX-10
lan       7  0/0/0/3/0/0/7           iexgbe  CLAIMED    INTERFACE  HP PCIe 2-p 10GbE Built-in FLEX-10
lan       8  0/0/0/4/0/0/0           iexgbe  CLAIMED    INTERFACE  HP PCIe 2-p 10GbE Built-in FLEX-10
lan       9  0/0/0/4/0/0/1           iexgbe  CLAIMED    INTERFACE  HP PCIe 2-p 10GbE Built-in FLEX-10
lan      10  0/0/0/4/0/0/2           iexgbe  CLAIMED    INTERFACE  HP PCIe 2-p 10GbE Built-in FLEX-10
lan      11  0/0/0/4/0/0/3           iexgbe  CLAIMED    INTERFACE  HP PCIe 2-p 10GbE Built-in FLEX-10
lan      12  0/0/0/4/0/0/4           iexgbe  CLAIMED    INTERFACE  HP PCIe 2-p 10GbE Built-in FLEX-10
lan      13  0/0/0/4/0/0/5           iexgbe  CLAIMED    INTERFACE  HP PCIe 2-p 10GbE Built-in FLEX-10
lan      14  0/0/0/4/0/0/6           iexgbe  CLAIMED    INTERFACE  HP PCIe 2-p 10GbE Built-in FLEX-10
lan      15  0/0/0/4/0/0/7           iexgbe  CLAIMED    INTERFACE  HP PCIe 2-p 10GbE Built-in FLEX-10
lan      16  0/0/0/5/0/0/0/2/0/0/0   iether  CLAIMED    INTERFACE  HP 447881-001 PCIe 4-port 1000Mb/s Mezzanine Adapter
lan      17  0/0/0/5/0/0/0/2/0/0/1   iether  CLAIMED    INTERFACE  HP 447881-001 PCIe 4-port 1000Mb/s Mezzanine Adapter
lan      18  0/0/0/5/0/0/0/4/0/0/0   iether  CLAIMED    INTERFACE  HP 447881-001 PCIe 4-port 1000Mb/s Mezzanine Adapter
lan      19  0/0/0/5/0/0/0/4/0/0/1   iether  CLAIMED    INTERFACE  HP 447881-001 PCIe 4-port 1000Mb/s Mezzanine Adapter

# lanscan
Hardware                Station         Crd  Hdw    Net-Interface   NM   MAC    HP-DLPI  DLPI
Path                    Address         In#  State  NamePPA         ID   Type   Support  Mjr#
0/0/0/3/0/0/0           0x643150008850  0    UP     lan0 snap0      1    ETHER  Yes      119
0/0/0/3/0/0/1           0x643150008854  1    UP     lan1 snap1      2    ETHER  Yes      119
0/0/0/3/0/0/2           0x643150008851  2    UP     lan2 snap2      3    ETHER  Yes      119
0/0/0/3/0/0/3           0x643150008855  3    UP     lan3 snap3      4    ETHER  Yes      119
0/0/0/3/0/0/4           0x643150008852  4    UP     lan4 snap4      5    ETHER  Yes      119
0/0/0/3/0/0/5           0x643150008856  5    UP     lan5 snap5      6    ETHER  Yes      119
0/0/0/3/0/0/6           0x643150008853  6    UP     lan6 snap6      7    ETHER  Yes      119
0/0/0/3/0/0/7           0x643150008857  7    UP     lan7 snap7      8    ETHER  Yes      119
0/0/0/4/0/0/0           0x643150008858  8    UP     lan8 snap8      9    ETHER  Yes      119
0/0/0/4/0/0/1           0x64315000885C  9    UP     lan9 snap9      10   ETHER  Yes      119
0/0/0/4/0/0/2           0x643150008859  10   UP     lan10 snap10    11   ETHER  Yes      119
0/0/0/4/0/0/3           0x64315000885D  11   UP     lan11 snap11    12   ETHER  Yes      119
0/0/0/4/0/0/4           0x64315000885A  12   UP     lan12 snap12    13   ETHER  Yes      119
0/0/0/4/0/0/5           0x64315000885E  13   UP     lan13 snap13    14   ETHER  Yes      119
0/0/0/4/0/0/6           0x64315000885B  14   UP     lan14 snap14    15   ETHER  Yes      119
0/0/0/4/0/0/7           0x64315000885F  15   UP     lan15 snap15    16   ETHER  Yes      119
0/0/0/5/0/0/0/2/0/0/0   0x0025B3B44464  16   UP     lan16 snap16    17   ETHER  Yes      119
0/0/0/5/0/0/0/2/0/0/1   0x0025B3B44465  17   UP     lan17 snap17    18   ETHER  Yes      119
0/0/0/5/0/0/0/4/0/0/0   0x0025B3B44466  18   UP     lan18 snap18    19   ETHER  Yes      119
0/0/0/5/0/0/0/4/0/0/1   0x0025B3B44467  19   UP     lan19 snap19    20   ETHER  Yes      119
LinkAgg0                0x000000000000  900  DOWN   lan900 snap900  22   ETHER  Yes      119
LinkAgg1                0x000000000000  901  DOWN   lan901 snap901  23   ETHER  Yes      119
LinkAgg2                0x000000000000  902  DOWN   lan902 snap902  24   ETHER  Yes      119
LinkAgg3                0x000000000000  903  DOWN   lan903 snap903  25   ETHER  Yes      119
LinkAgg4                0x000000000000  904  DOWN   lan904 snap904  26   ETHER  Yes      119

# smh  (SMH -> Networking and Communications -> Network Interfaces Configuration -> Network Interface Cards)
Interface Name  Subsystem  Hardware Path           Interface State  Interface Type  IPv4 Address    IPv6 Address
----------------------------------------------------------------------------------------------------------------
lan0            iexgbe     0/0/0/3/0/0/0           up               10GBASE-KR      192.168.5.26    Not Configured
lan1            iexgbe     0/0/0/3/0/0/1           up               10GBASE-KR      192.168.4.26    Not Configured
lan2            iexgbe     0/0/0/3/0/0/2           down             10GBASE-KR      Not Configured  Not Configured
lan3            iexgbe     0/0/0/3/0/0/3           down             10GBASE-KR      Not Configured  Not Configured
lan4            iexgbe     0/0/0/3/0/0/4           down             10GBASE-KR      Not Configured  Not Configured
lan5            iexgbe     0/0/0/3/0/0/5           up               10GBASE-KR      Not Configured  Not Configured
lan6            iexgbe     0/0/0/3/0/0/6           down             10GBASE-KR      Not Configured  Not Configured
lan7            iexgbe     0/0/0/3/0/0/7           down             10GBASE-KR      Not Configured  Not Configured
lan8            iexgbe     0/0/0/4/0/0/0           up               10GBASE-KR      Not Configured  Not Configured
lan9            iexgbe     0/0/0/4/0/0/1           up               10GBASE-KR      Not Configured  Not Configured
lan10           iexgbe     0/0/0/4/0/0/2           down             10GBASE-KR      Not Configured  Not Configured
lan11           iexgbe     0/0/0/4/0/0/3           down             10GBASE-KR      Not Configured  Not Configured
lan12           iexgbe     0/0/0/4/0/0/4           up               10GBASE-KR      Not Configured  Not Configured
lan13           iexgbe     0/0/0/4/0/0/5           down             10GBASE-KR      Not Configured  Not Configured
lan14           iexgbe     0/0/0/4/0/0/6           down             10GBASE-KR      Not Configured  Not Configured
lan15           iexgbe     0/0/0/4/0/0/7           down             10GBASE-KR      Not Configured  Not Configured
lan16           iether     0/0/0/5/0/0/0/2/0/0/0   up               1000Mb/s        192.168.16.180  Not Configured
lan17           iether     0/0/0/5/0/0/0/2/0/0/1   down             1000Mb/s        Not Configured  Not Configured
lan18           iether     0/0/0/5/0/0/0/4/0/0/0   down             1000Mb/s        Not Configured  Not Configured
lan19           iether     0/0/0/5/0/0/0/4/0/0/1   down             1000Mb/s        Not Configured  Not Configured
```
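One detail worth noting in the SMH output is that the 10Gb ports report 10GBASE-KR, i.e. backplane signaling toward the pass-thru module, which is a useful clue when the link negotiates blade-to-blade but not through external switches. When repeating cabling tests, the NIC table is easy to check by script; a small sketch that extracts each interface's link state from the listing text:

```python
import re

# Matches one NIC row: name, driver (iexgbe/iether), hardware path, state.
ROW = re.compile(r"\b(lan\d+)\s+(?:iexgbe|iether)\s+\S+\s+(up|down)\b")

def link_states(smh_text):
    """Map interface name -> 'up'/'down' from the SMH NIC table text."""
    return {name: state for name, state in ROW.findall(smh_text)}
```

Running it over the listing above shows only lan0/lan1/lan5/lan8/lan9/lan12 (plus lan16) with link, which matches where cables are actually plugged.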

 

 

WS460c Gen8 not detecting AMD FirePro S4000X Mezzanine


My WS460c Gen8 is not detecting the AMD FirePro S4000X under Windows 7 SP1 x64. In Device Manager I see the integrated Matrox G200eH under display adapters, but no other (or unknown) devices. In the OA (c7000) I can see a mezzanine card in slot 2 (HP MXM Mezzanine Type B). I have tried the following:

Installed HP PSP 9.70 and tested a second adapter: no change. Same problem with Windows 10. Latest BIOS installed (2019): same issue. One strange behavior of the workstation: setting "User mode" in the BIOS has no effect; even with user mode set, the front display still shows the GUI.

C7000 Page Load error


Good day

I am running OA 4.80 on a c7000 blade enclosure. After I log into the OA I only get half the information; I get a page-load error on two parts of the screen. The error reads "Failed to execute 'appendChild' on 'Node': parameter 1 is not of type 'Node'." I have tried three different browsers and am running Java 8 update 231. Anybody got an idea?

Thanks

Blade BL460c G8 cache module error


I have an alarm on a blade BL460c G8: it is degraded and the cache module status is "failed".

IML log -> [POST error 1800 - Slot X Drive Array: cache module super-cap is charging; caching will be enabled once the super-cap has been recharged. No action is required.]

System Information -> Storage:

Controller status: OK

Model: HP Smart Array P220i Controller

Cache module status: Failed

The log says no action is required, but the error has now persisted for a long time.

Device is reporting an internal degraded status


I am getting a degraded error on two of my blade servers.

IML logs

Main Memory  01/06/2020   10:37   01/06/2020  10:37   1   Corrected Memory Error threshold exceeded ((Processor 2, Memory Module 8))
Rack Infrastructure  01/08/2020   14:59   01/08/2020  14:42  2  Chassis Enclosure Serial Number CZ3321EPAC requires minimum firmware revision 03.50. It is currently 04.40.

 


Server getting down


Hello 

My blade server is restarting unexpectedly.

Syslog

```
Jan 9 16:46:16 stblade1 kernel: [362355.616139] mce_notify_irq: 7 callbacks suppressed
Jan 9 16:46:16 stblade1 kernel: [362355.616169] mce: [Hardware Error]: Machine check events logged
Jan 9 16:46:16 stblade1 kernel: [362355.616350] mce: [Hardware Error]: Machine check events logged
Jan 9 16:46:26 stblade1 systemd[1]: session-166.scope: Succeeded.
Jan 9 16:47:00 stblade1 systemd[1]: Starting Proxmox VE replication runner...
Jan 9 16:47:02 stblade1 systemd[1]: pvesr.service: Succeeded.
Jan 9 16:47:02 stblade1 systemd[1]: Started Proxmox VE replication runner.
Jan 9 16:48:00 stblade1 systemd[1]: Starting Proxmox VE replication runner...
Jan 9 16:48:02 stblade1 systemd[1]: pvesr.service: Succeeded.
Jan 9 16:48:02 stblade1 systemd[1]: Started Proxmox VE replication runner.
Jan 9 16:49:00 stblade1 kernel: [362519.457113] mce_notify_irq: 25 callbacks suppressed
Jan 9 16:49:00 stblade1 kernel: [362519.457133] mce: [Hardware Error]: Machine check events logged
Jan 9 16:49:00 stblade1 kernel: [362519.457231] mce: [Hardware Error]: Machine check events logged
Jan 9 16:49:00 stblade1 systemd[1]: Starting Proxmox VE replication runner...
Jan 9 16:49:02 stblade1 systemd[1]: pvesr.service: Succeeded.
Jan 9 16:49:02 stblade1 systemd[1]: Started Proxmox VE replication runner.
Jan 9 16:50:00 stblade1 systemd[1]: Starting Proxmox VE replication runner...
Jan 9 16:50:02 stblade1 systemd[1]: pvesr.service: Succeeded.
Jan 9 16:50:02 stblade1 systemd[1]: Started Proxmox VE replication runner.
Jan 9 16:51:00 stblade1 systemd[1]: Starting Proxmox VE replication runner...
Jan 9 16:51:02 stblade1 systemd[1]: pvesr.service: Succeeded.
Jan 9 16:51:02 stblade1 systemd[1]: Started Proxmox VE replication runner.
Jan 9 16:51:27 stblade1 kernel: [362666.914445] mce_notify_irq: 7 callbacks suppressed
Jan 9 16:51:27 stblade1 kernel: [362666.914484] mce: [Hardware Error]: Machine check events logged
Jan 9 16:51:27 stblade1 kernel: [362666.914622] mce: [Hardware Error]: Machine check events logged
Jan 9 16:51:47 stblade1 kernel: [362686.801954] general protection fault: 0000 [#1] SMP PTI
Jan 9 16:51:47 stblade1 kernel: [362686.812358] CPU: 8 PID: 7550 Comm: sh Tainted: P O 5.0.15-1-pve #1
Jan 9 16:51:47 stblade1 kernel: [362686.815825] Hardware name: HP ProLiant BL460c Gen8, BIOS I31 03/01/2013
Jan 9 16:51:47 stblade1 kernel: [362686.818550] RIP: 0010:copy_process.part.38+0x1ac/0x1fc0
Jan 9 16:51:47 stblade1 kernel: [362686.821359] Code: d2 65 48 8b 05 45 24 b8 47 65 48 0f b1 15 3c 24 b8 47 75 f5 48 85 c0 48 89 c1 49 89 c0 4c 8b 95 60 ff ff ff 0f 84 d0 06 00 00 <49> 8b 78 08 31 f6 ba 00 40 00 00 4c 89 95 58 ff ff ff 4c 89 85 60
Jan 9 16:51:47 stblade1 kernel: [362686.825602] RSP: 0018:ffffb528478b7d90 EFLAGS: 00010286
Jan 9 16:51:47 stblade1 kernel: [362686.827500] RAX: fffd9787d80c47c0 RBX: ffff978971fadc00 RCX: fffd9787d80c47c0
Jan 9 16:51:47 stblade1 kernel: [362686.829128] RDX: 0000000000000000 RSI: 00000000006000c0 RDI: ffff9789a083a880
Jan 9 16:51:47 stblade1 kernel: [362686.830727] RBP: ffffb528478b7e80 R08: fffd9787d80c47c0 R09: 0000000000000000
Jan 9 16:51:47 stblade1 kernel: [362686.832011] R10: ffff9788dca68000 R11: 0000000000000000 R12: 0000000001200011
Jan 9 16:51:47 stblade1 kernel: [362686.833298] R13: 0000000000000000 R14: 00007fd74f4b4850 R15: 00000000ffffffff
Jan 9 16:51:47 stblade1 kernel: [362686.834685] FS: 00007fd74f4b4580(0000) GS:ffff9789a7400000(0000) knlGS:0000000000000000
Jan 9 16:51:47 stblade1 kernel: [362686.836024] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jan 9 16:51:47 stblade1 kernel: [362686.837355] CR2: 00007fd74f4492c0 CR3: 0000000a3f582004 CR4: 00000000000626e0
Jan 9 16:51:47 stblade1 kernel: [362686.838787] Call Trace:
Jan 9 16:51:47 stblade1 kernel: [362686.840167] ? _copy_to_user+0x2b/0x40
Jan 9 16:51:47 stblade1 kernel: [362686.841522] ? cp_new_stat+0x152/0x180
Jan 9 16:51:47 stblade1 kernel: [362686.843028] _do_fork+0xf8/0x400
Jan 9 16:51:47 stblade1 kernel: [362686.844363] __x64_sys_clone+0x27/0x30
Jan 9 16:51:47 stblade1 kernel: [362686.845777] do_syscall_64+0x5a/0x110
Jan 9 16:51:47 stblade1 kernel: [362686.847112] entry_SYSCALL_64_after_hwframe+0x44/0xa9
Jan 9 16:51:47 stblade1 kernel: [362686.848417] RIP: 0033:0x7fd74f3b87be
Jan 9 16:51:47 stblade1 kernel: [362686.849798] Code: db 0f 85 25 01 00 00 64 4c 8b 0c 25 10 00 00 00 45 31 c0 4d 8d 91 d0 02 00 00 31 d2 31 f6 bf 11 00 20 01 b8 38 00 00 00 0f 05 <48> 3d 00 f0 ff ff 0f 87 b6 00 00 00 41 89 c4 85 c0 0f 85 c3 00 00
Jan 9 16:51:47 stblade1 kernel: [362686.852522] RSP: 002b:00007fffeb22cff0 EFLAGS: 00000246 ORIG_RAX: 0000000000000038
Jan 9 16:51:47 stblade1 kernel: [362686.854041] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fd74f3b87be
Jan 9 16:51:47 stblade1 kernel: [362686.855481] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000001200011
Jan 9 16:51:47 stblade1 kernel: [362686.856928] RBP: 0000000000000000 R08: 0000000000000000 R09: 00007fd74f4b4580
Jan 9 16:51:47 stblade1 kernel: [362686.858489] R10: 00007fd74f4b4850 R11: 0000000000000246 R12: 00005596294e4b48
Jan 9 16:51:47 stblade1 kernel: [362686.859980] R13: 0000000000000000 R14: 00007fffeb22d0b0 R15: 0000000000000002
Jan 9 16:51:47 stblade1 kernel: [362686.861467] Modules linked in: tcp_diag inet_diag dm_snapshot arc4 md4 cmac nls_utf8 cifs ccm fscache veth ebtable_filter ebtables ip_set ip6table_filter ip6_tables iptable_filter bpfilter softdog nfnetlink_log nfnetlink intel_rapl sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ipmi_ssif ghash_clmulni_intel aesni_intel aes_x86_64 crypto_simd cryptd zfs(PO) glue_helper zunicode(PO) intel_cstate zlua(PO) snd_pcm mgag200 snd_timer ttm snd soundcore intel_rapl_perf drm_kms_helper serio_raw pcspkr joydev input_leds drm i2c_algo_bit fb_sys_fops syscopyarea sysfillrect sysimgblt hpilo ioatdma dca ipmi_si ipmi_devintf ipmi_msghandler mac_hid acpi_power_meter zcommon(PO) znvpair(PO) zavl(PO) icp(PO) spl(O) vhost_net vhost tap ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi sunrpc ip_tables x_tables autofs4 btrfs xor zstd_compress raid6_pq dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio
Jan 9 16:51:47 stblade1 kernel: [362686.861704] hid_generic usbmouse usbkbd usbhid hid psmouse lpc_ich bnx2x hpsa mdio scsi_transport_sas libcrc32c video
Jan 9 16:51:47 stblade1 kernel: [362686.877429] ---[ end trace 770c837b12982041 ]---
Jan 9 16:51:47 stblade1 kernel: [362686.879496] RIP: 0010:copy_process.part.38+0x1ac/0x1fc0
Jan 9 16:51:47 stblade1 kernel: [362686.881383] Code: d2 65 48 8b 05 45 24 b8 47 65 48 0f b1 15 3c 24 b8 47 75 f5 48 85 c0 48 89 c1 49 89 c0 4c 8b 95 60 ff ff ff 0f 84 d0 06 00 00 <49> 8b 78 08 31 f6 ba 00 40 00 00 4c 89 95 58 ff ff ff 4c 89 85 60
Jan 9 16:51:47 stblade1 kernel: [362686.885837] RSP: 0018:ffffb528478b7d90 EFLAGS: 00010286
Jan 9 16:51:47 stblade1 kernel: [362686.887861] RAX: fffd9787d80c47c0 RBX: ffff978971fadc00 RCX: fffd9787d80c47c0
Jan 9 16:51:47 stblade1 kernel: [362686.890127] RDX: 0000000000000000 RSI: 00000000006000c0 RDI: ffff9789a083a880
Jan 9 16:51:47 stblade1 kernel: [362686.892174] RBP: ffffb528478b7e80 R08: fffd9787d80c47c0 R09: 0000000000000000
Jan 9 16:51:47 stblade1 kernel: [362686.894354] R10: ffff9788dca68000 R11: 0000000000000000 R12: 0000000001200011
Jan 9 16:51:47 stblade1 kernel: [362686.896369] R13: 0000000000000000 R14: 00007fd74f4b4850 R15: 00000000ffffffff
Jan 9 16:51:47 stblade1 kernel: [362686.898564] FS: 00007fd74f4b4580(0000) GS:ffff9789a7400000(0000) knlGS:0000000000000000
Jan 9 16:51:47 stblade1 kernel: [362686.900580] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jan 9 16:51:47 stblade1 kernel: [362686.902718] CR2: 00007fd74f4492c0 CR3: 0000000a3f582004 CR4: 00000000000626e0
Jan 9 16:52:00 stblade1 systemd[1]: Starting Proxmox VE replication runner...
Jan 9 16:52:01 stblade1 systemd[1]: pvesr.service: Succeeded.
Jan 9 16:52:01 stblade1 systemd[1]: Started Proxmox VE replication runner.
Jan 9 16:52:01 stblade1 kernel: [362700.911218] general protection fault: 0000 [#2] SMP PTI
Jan 9 16:52:01 stblade1 kernel: [362700.937953] CPU: 8 PID: 7674 Comm: run-parts Tainted: P D O 5.0.15-1-pve #1
Jan 9 16:52:01 stblade1 kernel: [362700.942433] Hardware name: HP ProLiant BL460c Gen8, BIOS I31 03/01/2013
Jan 9 16:52:01 stblade1 kernel: [362700.945378] RIP: 0010:copy_process.part.38+0x1ac/0x1fc0
Jan 9 16:52:01 stblade1 kernel: [362700.948200] Code: d2 65 48 8b 05 45 24 b8 47 65 48 0f b1 15 3c 24 b8 47 75 f5 48 85 c0 48 89 c1 49 89 c0 4c 8b 95 60 ff ff ff 0f 84 d0 06 00 00 <49> 8b 78 08 31 f6 ba 00 40 00 00 4c 89 95 58 ff ff ff 4c 89 85 60
Jan 9 16:52:01 stblade1 kernel: [362700.952764] RSP: 0018:ffffb528474e7d90 EFLAGS: 00010206
Jan 9 16:52:01 stblade1 kernel: [362700.954781] RAX: 000800000003b000 RBX: ffff978972a34500 RCX: 0000000000000000
Jan 9 16:52:01 stblade1 kernel: [362700.956655] RDX: 0000000000000000 RSI: 00000000006000c0 RDI: ffff9789861d7a80
Jan 9 16:52:01 stblade1 kernel: [362700.958507] RBP: ffffb528474e7e80 R08: 000800000003b000 R09: 0000000000000000
Jan 9 16:52:01 stblade1 kernel: [362700.960206] R10: ffff97834b39c500 R11: 0000000000000000 R12: 0000000001200011
Jan 9 16:52:01 stblade1 kernel: [362700.961928] R13: 0000000000000000 R14: 00007fcb19557a10 R15: 00000000ffffffff
Jan 9 16:52:01 stblade1 kernel: [362700.963753] FS: 00007fcb19557740(0000) GS:ffff9789a7400000(0000) knlGS:0000000000000000
Jan 9 16:52:01 stblade1 kernel: [362700.965519] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jan 9 16:52:01 stblade1 kernel: [362700.967362] CR2: 00000000010f9118 CR3: 0000000aa2d54002 CR4: 00000000000626e0
Jan 9 16:52:01 stblade1 kernel: [362700.969165] Call Trace:
Jan 9 16:52:01 stblade1 kernel: [362700.971034] ? security_file_alloc+0x4e/0x90
Jan 9 16:52:01 stblade1 kernel: [362700.972788] _do_fork+0xf8/0x400
Jan 9 16:52:01 stblade1 kernel: [362700.974570] ? __secure_computing+0x3e/0xd0
Jan 9 16:52:01 stblade1 kernel: [362700.976261] ? syscall_trace_enter+0x196/0x2b0
Jan 9 16:52:01 stblade1 kernel: [362700.977913] __x64_sys_clone+0x27/0x30
Jan 9 16:52:01 stblade1 kernel: [362700.979550] do_syscall_64+0x5a/0x110
Jan 9 16:52:01 stblade1 kernel: [362700.981230] entry_SYSCALL_64_after_hwframe+0x44/0xa9
Jan 9 16:52:01 stblade1 kernel: [362700.982898] RIP: 0033:0x7fcb18c08922
Jan 9 16:52:01 stblade1 kernel: [362700.984547] Code: f7 d8 64 89 04 25 d4 02 00 00 64 4c 8b 04 25 10 00 00 00 31 d2 4d 8d 90 d0 02 00 00 31 f6 bf 11 00 20 01 b8 38 00 00 00 0f 05 <48> 3d 00 f0 ff ff 0f 87 5d 01 00 00 85 c0 41 89 c5 0f 85 67 01 00
Jan 9 16:52:01 stblade1 kernel: [362700.987959] RSP: 002b:00007fff191b4da0 EFLAGS: 00000246 ORIG_RAX: 0000000000000038
Jan 9 16:52:01 stblade1 kernel: [362700.989627] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fcb18c08922
Jan 9 16:52:01 stblade1 kernel: [362700.991271] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000001200011
Jan 9 16:52:01 stblade1 kernel: [362700.992880] RBP: 00007fff191b4dc0 R08: 00007fcb19557740 R09: 0000000000000000
Jan 9 16:52:01 stblade1 kernel: [362700.994444] R10: 00007fcb19557a10 R11: 0000000000000246 R12: 0000000000000000
Jan 9 16:52:01 stblade1 kernel: [362700.996028] R13: 0000000000000000 R14: 0000000000000001 R15: 00007fff191b51fc
Jan 9 16:52:01 stblade1 kernel: [362700.997535] Modules linked in: tcp_diag inet_diag dm_snapshot arc4 md4 cmac nls_utf8 cifs ccm fscache veth ebtable_filter ebtables ip_set ip6table_filter ip6_tables iptable_filter bpfilter softdog nfnetlink_log nfnetlink intel_rapl sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ipmi_ssif ghash_clmulni_intel aesni_intel aes_x86_64 crypto_simd cryptd zfs(PO) glue_helper zunicode(PO) intel_cstate zlua(PO) snd_pcm mgag200 snd_timer ttm snd soundcore intel_rapl_perf drm_kms_helper serio_raw pcspkr joydev input_leds drm i2c_algo_bit fb_sys_fops syscopyarea sysfillrect sysimgblt hpilo ioatdma dca ipmi_si ipmi_devintf ipmi_msghandler mac_hid acpi_power_meter zcommon(PO) znvpair(PO) zavl(PO) icp(PO) spl(O) vhost_net vhost tap ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi sunrpc ip_tables x_tables autofs4 btrfs xor zstd_compress raid6_pq dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio
Jan 9 16:52:01 stblade1 kernel: [362700.997955] hid_generic usbmouse usbkbd usbhid hid psmouse lpc_ich bnx2x hpsa mdio scsi_transport_sas libcrc32c video
Jan 9 16:52:01 stblade1 kernel: [362701.012086] ---[ end trace 770c837b12982042 ]---
Jan 9 16:52:01 stblade1 kernel: [362701.013810] RIP: 0010:copy_process.part.38+0x1ac/0x1fc0
Jan 9 16:52:01 stblade1 kernel: [362701.015706] Code: d2 65 48 8b 05 45 24 b8 47 65 48 0f b1 15 3c 24 b8 47 75 f5 48 85 c0 48 89 c1 49 89 c0 4c 8b 95 60 ff ff ff 0f 84 d0 06 00 00 <49> 8b 78 08 31 f6 ba 00 40 00 00 4c 89 95 58 ff ff ff 4c 89 85 60
Jan 9 16:52:01 stblade1 kernel: [362701.019657] RSP: 0018:ffffb528478b7d90 EFLAGS: 00010286
Jan 9 16:52:01 stblade1 kernel: [362701.021366] RAX: fffd9787d80c47c0 RBX: ffff978971fadc00 RCX: fffd9787d80c47c0
Jan 9 16:52:01 stblade1 kernel: [362701.023107] RDX: 0000000000000000 RSI: 00000000006000c0 RDI: ffff9789a083a880
Jan 9 16:52:02 stblade1 kernel: [362701.024806] RBP: ffffb528478b7e80 R08: fffd9787d80c47c0 R09: 0000000000000000
Jan 9 16:52:02 stblade1 kernel: [362701.026571] R10: ffff9788dca68000 R11: 0000000000000000 R12: 0000000001200011
Jan 9 16:52:02 stblade1 kernel: [362701.028236] R13: 0000000000000000 R14: 00007fd74f4b4850 R15: 00000000ffffffff
Jan 9 16:52:02 stblade1 kernel: [362701.030022] FS: 00007fcb19557740(0000) GS:ffff9789a7400000(0000) knlGS:0000000000000000
Jan 9 16:52:02 stblade1 kernel: [362701.031694] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jan 9 16:52:02 stblade1 kernel: [362701.033310] CR2: 00000000010f9118 CR3: 0000000aa2d54002 CR4: 00000000000626e0
Jan 9 16:52:15 stblade1 kernel: [362714.420224] general protection fault: 0000 [#3] SMP PTI
Jan 9 16:52:15 stblade1 kernel: [362714.443211] CPU: 10 PID: 1071 Comm: ksmtuned Tainted: P D O 5.0.15-1-pve #1
Jan 9 16:52:15 stblade1 kernel: [362714.446920] Hardware name: HP ProLiant BL460c Gen8, BIOS I31 03/01/2013
Jan 9 16:52:15 stblade1 kernel: [362714.449446] RIP: 0010:copy_process.part.38+0x1ac/0x1fc0
Jan 9 16:52:15 stblade1 kernel: [362714.451807] Code: d2 65 48 8b 05 45 24 b8 47 65 48 0f b1 15 3c 24 b8 47 75 f5 48 85 c0 48 89 c1 49 89 c0 4c 8b 95 60 ff ff ff 0f 84 d0 06 00 00 <49> 8b 78 08 31 f6 ba 00 40 00 00 4c 89 95 58 ff ff ff 4c 89 85 60
Jan 9 16:52:15 stblade1 kernel: [362714.456432] RSP: 0018:ffffb52847a83d90 EFLAGS: 00010286
Jan 9 16:52:15 stblade1 kernel: [362714.458757] RAX: fffd9783a041b200 RBX: ffff9788d8018000 RCX: fffd9783a041b200
Jan 9 16:52:15 stblade1 kernel: [362714.460932] RDX: 0000000000000000 RSI: 00000000006000c0 RDI: ffff9789a1890300
Jan 9 16:52:15 stblade1 kernel: [362714.463134] RBP: ffffb52847a83e80 R08: fffd9783a041b200 R09: 0000000000000000
Jan 9 16:52:15 stblade1 kernel: [362714.465257] R10: ffff9789a0f50000 R11: 0000000000000000 R12: 0000000001200011
Jan 9 16:52:15 stblade1 kernel: [362714.467346] R13: 0000000000000000 R14: 00007fdd7bc76a10 R15: 00000000ffffffff
Jan 9 16:52:15 stblade1 kernel: [362714.469438] FS: 00007fdd7bc76740(0000) GS:ffff9789a7480000(0000) knlGS:0000000000000000
Jan 9 16:52:15 stblade1 kernel: [362714.471544] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jan 9 16:52:15 stblade1 kernel: [362714.473662] CR2: 00007fdd7be357f0 CR3: 0000000c20ada004 CR4: 00000000000626e0
Jan 9 16:52:15 stblade1 kernel: [362714.475794] Call Trace:
```
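The repeated `mce: [Hardware Error]: Machine check events logged` lines are machine-check notifications that precede each general protection fault, which points toward failing hardware (CPU, memory, or board) rather than a software bug; decoding them with `mcelog` or `rasdaemon` would identify the component. As a quick first pass, their frequency over time is informative; a small sketch that tallies them per syslog minute:

```python
import re
from collections import Counter

# Matches the kernel's corrected machine-check notification line.
MCE_RE = re.compile(r"mce: \[Hardware Error\]: Machine check events logged")

def count_mce_events(lines):
    """Tally machine-check notifications per minute of syslog time.
    Assumes classic syslog timestamps like 'Jan 9 16:46:16 host ...'."""
    per_minute = Counter()
    for line in lines:
        if MCE_RE.search(line):
            # 'Jan 9 16:46:16' -> key on 'Jan 9 16:46' (drop the seconds)
            stamp = " ".join(line.split()[:3])[:-3]
            per_minute[stamp] += 1
    return per_minute
```

Feeding it the syslog excerpt above shows bursts every two to three minutes leading up to the first fault, a pattern consistent with a degrading DIMM or CPU.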

Stupid Question


Hey there, I had been using Dell servers for many years, and I just bought an HP c7000 BladeSystem with 16 BL460c G6 blades.

My question is: is it possible to take the 16 BL460c blades and combine them into one server, so that the resources are shared?

If that is not possible, can the storage be combined into one pool, like a SAN?

I have been reading up on HP Virtual Connect, and it's pretty complicated.

What is the latest BL460c generation that a Gen2 blade chassis can take?

 

 

 

Port 6: portcfgpersistentenable failed. Configuration is not capable(6).


Last weekend we added a BL460c Gen10 blade into bay #6.

'portcfgpersistentenable 6' fails and I have no idea what we are doing wrong.

The license is there, and no errors are seen in 'errdump'.

The same holds for all the other persistently disabled ports.

However, port 15, for example, I can persistent-disable and then enable again.

Thanks in advance for your help!

FABRIC-10:> switchshow
switchName: FABRIC-10
switchType: 129.1
switchState: Online
switchMode: Access Gateway Mode
switchWwn: 10:00:50:eb:1a:9e:51:13
switchBeacon: OFF

Index Port Address Media Speed State Proto
==================================================
0 0 010000 -- N16 No_Module FC Disabled (Persistent)
1 1 010100 cu N16 Online FC F-Port 50:01:43:80:24:d3:38:b4 0x830801
2 2 010200 cu N16 Online FC F-Port 50:01:43:80:24:d3:3c:7c 0x83054c
3 3 010300 cu N16 Online FC F-Port 51:40:2e:c0:00:cf:08:c0 0x830803
4 4 010400 cu N16 No_SigDet FC
5 5 010500 cu N16 In_Sync FC Disabled (Persistent)
6 6 010600 cu N16 In_Sync FC Disabled (Persistent)
7 7 010700 cu N16 Online FC F-Port 51:40:2e:c0:00:cf:07:b4 0x830502
8 8 010800 cu N16 No_SigDet FC
9 9 010900 cu N16 Online FC F-Port 50:01:43:80:24:d3:3c:c0 0x830548
10 10 010a00 cu N16 Online FC F-Port 50:01:43:80:24:d3:3c:ac 0x830549
11 11 010b00 cu N16 Online FC F-Port 51:40:2e:c0:00:cf:08:c4 0x830501
12 12 010c00 cu N16 Online FC F-Port 51:40:2e:c0:00:cf:08:b8 0x83054e
13 13 010d00 cu N16 In_Sync FC Disabled (Persistent)
14 14 010e00 cu N16 In_Sync FC Disabled (Persistent)
15 15 010f00 cu N16 No_SigDet FC
16 16 011000 cu N16 In_Sync FC Disabled (Persistent)
17 17 011100 id N16 Online FC N-Port 10:00:50:eb:1a:73:81:01 0x830500 (AoQ)
18 18 011200 id N16 Online FC N-Port 10:00:50:eb:1a:73:81:01 0x830800 (AoQ)
19 19 011300 -- N16 No_Module FC Disabled (Persistent)
20 20 011400 -- N16 No_Module FC Disabled (Persistent)
21 21 011500 -- N16 No_Module FC Disabled (Persistent)
22 22 011600 -- N16 No_Module FC Disabled (Persistent)
23 23 011700 -- N16 No_Module FC Disabled (Persistent)
24 24 011800 -- N16 No_Module FC Disabled (Persistent)
25 25 011900 -- N16 No_Module FC Disabled (Persistent)
26 26 011a00 -- N16 No_Module FC Disabled (Persistent)
27 27 011b00 -- N16 No_Module FC Disabled (Persistent)

FABRIC-10:> portcfgpersistentenable 6
Port 6: portcfgpersistentenable failed. Configuration is not capable(6).

FABRIC-10:> licenseshow
CrFTLPTZCXFSBLQQQgamK3PMNFGNQJEBBSABJ:
Full Ports on Demand license - additional 12 port upgrade license
QYP7Q43RPtgGBM7FTMGAaNtDa7TfgPRaBJ4tA:
Extended Fabric license
Fabric Watch license
Performance Monitor license
Trunking license
Fabric Vision license
License Id:
10:00:50:eb:1a:9e:51:13
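One thing worth noting from the switchshow output is that the module runs in Access Gateway mode, where port behavior is governed by the F-port to N-port mapping; checking `ag --mapshow` may reveal whether port 6 is missing from the map. Separately, when auditing several modules, the persistently disabled ports are easy to pull out of the listing by script; a small sketch over the switchshow text format shown above:

```python
def persistently_disabled_ports(switchshow_text):
    """Return port numbers marked 'Disabled (Persistent)' in Brocade
    switchshow output (rows start with the numeric Index column)."""
    ports = []
    for line in switchshow_text.splitlines():
        cols = line.split()
        if cols and cols[0].isdigit() and "Disabled" in line and "(Persistent)" in line:
            ports.append(int(cols[1]))  # second column is the port number
    return ports
```

Against the listing above this yields ports 0, 5, 6, 13, 14, 16, and 19-27, which matches the ports that refuse `portcfgpersistentenable`.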

Thanks in advance!

Best regards,

Ron Klerks

Request Letter/Certificate of Volatility for HPE ProLiant DL360 Gen10 Server


We are using the HPE ProLiant DL360 Gen10 server (Part No. 867963-B21) in our instrumentation suites. In order to bring this material into our secure areas, we need to document the types of memory in the device and the use of that memory. This is usually captured in a Letter or Certificate of Volatility (LOV/COV). Can you please reply with the LOV/COV for this server?

I have already emailed ISS LOV Requests at isslovrequests@hpe.com, but I did not receive a reply.

PLEASE HELP!

Self-help Solutions for Frequent issues – Just a Click Away!


