How to get detailed JMX metric data from the NameNode's port 50070?

2026-04-12 16:42


In production, DataNodes need to be decommissioned before their machines are shut down and taken offline. Before taking a machine offline you have to confirm that decommissioning has finished, and checking the NameNode's 50070 web UI by hand is obviously inefficient, so I wrote a simple script to fetch node status quickly. The /jmx endpoint on port 50070 exposes much more information: you can collect whichever metrics you need, turn them into a Prometheus exporter, or write them into a time-series database. This article is for learning and exchange only.

# -*- coding: utf-8 -*-
__author__ = 'machine'
# date: 20220720
import json
import requests

url_dict = {
    'cluster1': '192.168.100.1:50070',
    'cluster2': '192.168.14.1:50070'
}

for k, v in url_dict.items():
    print(" ")
    print("-----------------------------------------------------------------------------")
    print("Cluster name:", k)
    # The /jmx servlet needs a full URL, including the http:// scheme.
    url = 'http://' + v + '/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo'
    print(url)
    req = requests.get(url)
    result_json = json.loads(req.text)
    # LiveNodes is itself a JSON string inside the bean and must be parsed again.
    livenode = json.loads(result_json['beans'][0]['LiveNodes'])
    deadnode = result_json['beans'][0]['DeadNodes']  # also a JSON string; unused below
    print("Service state of running nodes: ")
    list_inservernode = []
    list_decommissioned = []
    for lip in livenode.values():
        status = lip['adminState'].split(' ')[0]
        if status == 'Decommissioned':
            list_decommissioned.append(lip['xferaddr'].split(':')[0])
        else:
            list_inservernode.append(lip['xferaddr'].split(':')[0])
    print(" ")
    print("Decommissioned nodes")
    for i in list_decommissioned:
        print(i)
    print("In-service nodes")
    for i in list_inservernode:
        print(i)
    print(" ")
    print('----------------------------- ' + "HDFS space usage" + ' -----------------------------')
    info = result_json['beans'][0]
    print("HDFS total space:", info['Total'] // (1024 ** 4), 'TB')
    print("HDFS used space:", info['Used'] // (1024 ** 4), 'TB')
    print("HDFS free space:", info['Free'] // (1024 ** 4), 'TB')
    print("HDFS used space (percentage):", info['PercentUsed'], '%')
    print("-----------------------------------------------------------------------------")
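The article mentions that these metrics could be turned into a Prometheus exporter. As a minimal sketch of that idea, the helper below renders the same NameNodeInfo fields the script reads into Prometheus text exposition lines; the metric names are my own invention, and the sample bean is trimmed to just those fields:

```python
def to_prometheus(bean, cluster):
    """Render a few NameNodeInfo fields as Prometheus text-exposition lines."""
    # Hypothetical metric names; pick your own naming convention.
    mapping = (("Total", "hdfs_capacity_total_bytes"),
               ("Used", "hdfs_capacity_used_bytes"),
               ("Free", "hdfs_capacity_free_bytes"),
               ("PercentUsed", "hdfs_capacity_used_percent"))
    lines = []
    for attr, metric in mapping:
        lines.append('%s{cluster="%s"} %s' % (metric, cluster, bean[attr]))
    return "\n".join(lines)

# Trimmed sample of the dict found at result_json['beans'][0]:
sample = {"Total": 4398046511104, "Used": 1099511627776,
          "Free": 3298534883328, "PercentUsed": 25.0}
print(to_prometheus(sample, "cluster1"))
```

A real exporter would serve this text over HTTP (for example with the prometheus_client library) instead of printing it, but the formatting step is the same.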

Selected Hadoop JMX parameters

curl 'http://192.168.10.2:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo'

NameNode:50070

qry=Hadoop:service=NameNode,name=RpcActivityForPort8020
MemHeapMaxM
MemMaxM

Hadoop:service=NameNode,name=JvmMetrics

MemHeapMaxM
MemMaxM

Hadoop:service=NameNode,name=FSNamesystem

CapacityTotal
CapacityTotalGB
CapacityRemaining
CapacityRemainingGB
TotalLoad
FilesTotal

Hadoop:service=NameNode,name=FSNamesystemState

NumLiveDataNodes

Hadoop:service=NameNode,name=NameNodeInfo

LiveNodes

java.lang:type=Runtime

StartTime

Hadoop:service=NameNode,name=FSNamesystemState

TopUserOpCounts:timestamp

Hadoop:service=NameNode,name=NameNodeActivity

CreateFileOps
FilesCreated
FilesAppended
FilesRenamed
GetListingOps
DeleteFileOps
FilesDeleted

Hadoop:service=NameNode,name=FSNamesystem

CapacityTotal
CapacityTotalGB
CapacityUsed
CapacityUsedGB
CapacityRemaining
CapacityRemainingGB
CapacityUsedNonDFS

DataNode

DataNode:50075

Hadoop:service=DataNode,name=DataNodeActivity-slave-50010

BytesWritten
BytesRead
BlocksWritten
BlocksRead
ReadsFromLocalClient
ReadsFromRemoteClient
WritesFromLocalClient
WritesFromRemoteClient
BlocksGetLocalPathInfo
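Any bean in the lists above can be fetched the same way: build the URL with the bean name as the qry value, then read the attribute out of the first matched bean. A sketch, again using a canned response instead of a live cluster:

```python
import json

def jmx_query_url(host_port, bean):
    # The JMX servlet filters beans by the qry parameter.
    return "http://%s/jmx?qry=%s" % (host_port, bean)

def get_attr(jmx_json_text, attr):
    """Pull one attribute out of the first bean in a /jmx response."""
    beans = json.loads(jmx_json_text)["beans"]
    return beans[0][attr] if beans else None

# Simulated response for the FSNamesystemState bean:
payload = json.dumps({"beans": [{"name": "Hadoop:service=NameNode,name=FSNamesystemState",
                                 "NumLiveDataNodes": 12}]})
print(jmx_query_url("192.168.10.2:50070",
                    "Hadoop:service=NameNode,name=FSNamesystemState"))
print(get_attr(payload, "NumLiveDataNodes"))  # 12
```

Against a live NameNode you would replace `payload` with `requests.get(url).text`, as in the script earlier in the article.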

