Powering virtual machines on and off from the ESXi shell

To power on a virtual machine from the command line:
  1. List the inventory ID of the virtual machine with the command:

vim-cmd vmsvc/getallvms | grep <vm name>

Note: The first column of the output shows the vmid.
  2. Check the power state of the virtual machine with the command:

vim-cmd vmsvc/power.getstate <vmid>

  3. Power on the virtual machine with the command:

vim-cmd vmsvc/power.on <vmid>
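
Putting the three steps together, a minimal sketch (the VM name "myvm" is a placeholder, and it assumes awk is available in the ESXi busybox shell):

VMID=$(vim-cmd vmsvc/getallvms | grep "myvm" | awk '{print $1}')   # first column is the vmid
vim-cmd vmsvc/power.getstate "$VMID"                               # check the current power state
vim-cmd vmsvc/power.on "$VMID"                                     # power the VM on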

To power off, just replace power.on with power.off. Reference: Powering on a virtual machine from the command line when the host cannot be managed using vSphere Client.
There is also a second method, which can only power off:
Get a list of running virtual machines, identified by World ID, UUID, Display Name, and path to the .vmx configuration file, using this command:

esxcli vm process list        # lists only the virtual machines that are powered on

Power off one of the virtual machines from the list using this command:

esxcli vm process kill --type=[soft,hard,force] --world-id=WorldNumber

Notes: Three power-off methods are available. Soft is the most graceful, hard performs an immediate shutdown, and force should be used as a last resort. Alternate power-off command syntax is:
esxcli vm process kill -t [soft,hard,force] -w WorldNumber
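
As a concrete example, a soft power-off of one VM from that list could look like this (the World ID 123456 is an illustrative value, not taken from real output):

esxcli vm process list                                    # note the World ID of the target VM
esxcli vm process kill --type=soft --world-id=123456      # 123456 is a made-up World ID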

Logging in to ESXi with an ssh key

After logging in to the ESXi system over ssh, the root password can be changed with passwd root; see Changing ESXi Root Password, which also explains how to enable ssh login on ESXi. For ESXi 5.0, the location of authorized_keys is: /etc/ssh/keys-<username>/authorized_keys
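
For the key login itself, the local public key has to be appended to that file. A minimal sketch from a workstation, where esxi-host and ~/.ssh/id_rsa.pub are placeholders for your host and key:

ssh root@esxi-host 'cat >> /etc/ssh/keys-root/authorized_keys' < ~/.ssh/id_rsa.pub   # append the public key for root
ssh root@esxi-host hostname                                                          # should now succeed without a password prompt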

GlusterFS troubleshooting

  1. glusterfs is unusable because the rpc service is unavailable, with errors such as:
Dec  9 10:36:32 node3 systemd: Cannot add dependency job for unit loopback_gluster.service, ignoring: Unit not found.
Dec  9 10:36:32 node3 systemd: rpcbind.socket failed to listen on sockets: Address family not supported by protocol
Dec  9 10:36:32 node3 systemd: Failed to listen on RPCbind Server Activation Socket.
Dec  9 10:36:32 node3 systemd: Dependency failed for RPC bind service.
Dec  9 10:36:32 node3 systemd: Dependency failed for GlusterFS, a clustered file-system server.
Dec  9 10:36:32 node3 systemd: Job glusterd.service/start failed with result 'dependency'.
Dec  9 10:36:32 node3 systemd: Job rpcbind.service/start failed with result 'dependency'.
Dec  9 10:38:01 node3 systemd: Cannot add dependency job for unit loopback_gluster.service, ignoring: Unit not found.
Dec  9 10:38:01 node3 systemd: rpcbind.socket failed to listen on sockets: Address family not supported by protocol
Dec  9 10:38:01 node3 systemd: Failed to listen on RPCbind Server Activation Socket.
Dec  9 10:38:01 node3 systemd: Dependency failed for RPC bind service.
Dec  9 10:38:01 node3 systemd: Job rpcbind.service/start failed with result 'dependency'.
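
The key line is "rpcbind.socket failed to listen on sockets: Address family not supported by protocol", which usually means IPv6 is disabled on the node while rpcbind.socket still declares IPv6 listeners. A hedged workaround sketch, assuming that is the cause here (the drop-in name no-ipv6.conf is arbitrary, and the listener list should mirror your distribution's original unit minus the IPv6 entries):

mkdir -p /etc/systemd/system/rpcbind.socket.d
cat > /etc/systemd/system/rpcbind.socket.d/no-ipv6.conf <<'EOF'
[Socket]
ListenStream=
ListenDatagram=
ListenStream=/var/run/rpcbind.sock
ListenStream=0.0.0.0:111
ListenDatagram=0.0.0.0:111
EOF
systemctl daemon-reload
systemctl restart rpcbind.socket rpcbind.service
systemctl start glusterd          # the rpcbind dependency should now be satisfied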
Continue reading

Integrating GlusterFS with Kubernetes

  1. Install the EPEL repository
pssh -l auto -h /opt/node.list -i 'sudo yum install -y epel-release*'
  2. Install heketi (see the sketch below)
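
heketi and its CLI are distributed as packages (the names heketi and heketi-client are an assumption here, commonly provided via EPEL or the CentOS Storage SIG), so the same pssh node list can probably be reused:

pssh -l auto -h /opt/node.list -i 'sudo yum install -y heketi heketi-client'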
Continue reading

GlusterFS volume shrinking, expansion, and brick replacement (repost)

Source: https://www.cnblogs.com/bfmq/p/9990467.html

  • 1. Volume expansion
[root@g1 ~]# gluster peer probe g3                # add the new node to the cluster; skip this if the node is already a cluster member
peer probe: success. Host g3 port 24007 already in peer list        # this host had already been added
[root@g1 ~]# gluster volume info test                             # the volume currently has 4 bricks

Volume Name: test
Type: Distributed-Replicate
Volume ID: 92ffe586-ea14-4b7b-9b89-5dfd626cb6d4
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: g1:/glusterfs/data1
Brick2: g2:/glusterfs/data1
Brick3: g3:/glusterfs/data1
Brick4: g1:/glusterfs/data2
[root@g1 ~]# gluster volume add-brick test g2:/glusterfs/data2 g3:/glusterfs/data2 g1:/glusterfs/data3 g2:/glusterfs/data3 g3:/glusterfs/data3
volume add-brick: failed: Incorrect number of bricks supplied 5 with count 2        # the same brick-count vs. replica-count problem as before: the number of bricks added must be a multiple of the replica count, so plan disk counts per server accordingly; for a replica-3 volume, for example, one of ten new disks cannot be added
[root@g1 ~]# gluster volume add-brick test g2:/glusterfs/data2 g3:/glusterfs/data2 g1:/glusterfs/data3 g2:/glusterfs/data3
volume add-brick: failed: The brick g1:/glusterfs/data3 is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.        # these bricks still live on the / partition, so force is required
[root@g1 ~]# gluster volume add-brick test g2:/glusterfs/data2 g3:/glusterfs/data2 g1:/glusterfs/data3 g2:/glusterfs/data3 force
volume add-brick: success
[root@g1 ~]# gluster volume info test                 # the volume now has more bricks

Volume Name: test
Type: Distributed-Replicate
Volume ID: 92ffe586-ea14-4b7b-9b89-5dfd626cb6d4
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: g1:/glusterfs/data1
Brick2: g2:/glusterfs/data1
Brick3: g3:/glusterfs/data1
Brick4: g1:/glusterfs/data2
Brick5: g2:/glusterfs/data2
Brick6: g3:/glusterfs/data2
Brick7: g1:/glusterfs/data3
Brick8: g2:/glusterfs/data3
[root@g1 ~]# gluster volume rebalance test start                # redistribute the existing data evenly across the new layout
volume rebalance: test: success: Rebalance on test has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: a2f4b603-283a-4303-8ad0-84db00adb5a5
[root@g1 ~]# gluster volume rebalance test status            # check the task status; rebalancing takes a while when the files are large
                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                2        0Bytes            10             0             0            completed        0:00:00
                                      g2                1        0Bytes             9             0             0            completed        0:00:00
                                      g3                3        0Bytes             6             0             0            completed        0:00:00
volume rebalance: test: success
[root@g1 ~]# gluster volume rebalance test stop                # once every node reports completed, the rebalance task can be stopped
                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                2        0Bytes            10             0             0            completed        0:00:00
                                      g2                1        0Bytes             9             0             0            completed        0:00:00
                                      g3                3        0Bytes             6             0             0            completed        0:00:00
volume rebalance: test: success: rebalance process may be in the middle of a file migration.
The process will be fully stopped once the migration of the file is complete.
Please check rebalance process for completion before doing any further brick related tasks on the volume.

[root@g1 ~]# gluster volume rebalance test status        # no rebalance task is running on this volume any more
volume rebalance: test: failed: Rebalance not started for volume test.
Continue reading