
[Feature] Support customized JMX monitoring through the Factory Pattern. #2932

Merged
25 commits merged into apache:master from the feature_JmxCustomization branch on Jan 5, 2025

Conversation

doveLin0818
Contributor

What's changed?

Original Kafka monitoring indicators:
[screenshots]

Kafka monitoring indicators after this change:
[screenshots]

  1. Developed a generalized factory pattern template for customized JMX, making it easy for other components to integrate (see the sketch after this list).
  2. Added new customized metrics for Kafka and enhanced its documentation.
  3. The newly added customization strategy does not affect users' custom templates, nor does it affect the original generic JMX templates.
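
A minimal sketch of how such a factory could look. The class and method names here (JmxCustomizedProcessor, JmxProcessorFactory, KafkaJmxProcessor) are illustrative assumptions, not the exact classes introduced by this PR:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface JmxCustomizedProcessor {
    // Pre-process the configured objectName before the collector queries the MBean server.
    String preprocessObjectName(String objectName);
}

class KafkaJmxProcessor implements JmxCustomizedProcessor {
    @Override
    public String preprocessObjectName(String objectName) {
        // Keep only the stable "type=..." part so that, for example, both
        // kafka.server:type=GroupMetadataManager and
        // kafka.coordinator.group:type=GroupMetadataManager are matched.
        int idx = objectName.indexOf("type=");
        return idx >= 0 ? "*:" + objectName.substring(idx) + ",*" : objectName;
    }
}

final class JmxProcessorFactory {
    private static final Map<String, JmxCustomizedProcessor> PROCESSORS = new ConcurrentHashMap<>();
    // Pass-through processor keeps the original generic JMX template behaviour intact.
    private static final JmxCustomizedProcessor GENERIC = objectName -> objectName;

    static {
        PROCESSORS.put("kafka", new KafkaJmxProcessor());
    }

    private JmxProcessorFactory() {
    }

    // Other components integrate by registering their own processor under their app name;
    // apps without an entry fall back to the generic behaviour.
    static JmxCustomizedProcessor getProcessor(String app) {
        return PROCESSORS.getOrDefault(app, GENERIC);
    }
}
```

With this shape, adding a customized strategy for another component is just another registration in the factory, and apps without an entry keep the existing generic JMX behaviour.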

Checklist

  • [✅] I have read the Contributing Guide
  • [✅] I have written the necessary doc or comment.
  • I have added the necessary unit tests and all cases have passed.

Add or update API

  • I have added the necessary e2e tests and all cases have passed.

@tomsun28 tomsun28 added this to the 1.7.0 milestone Jan 1, 2025
@doveLin0818
Contributor Author

@zhangshenghang hi, you were right earlier: the way MBeans are accessed can vary between different Kafka versions. This customization change can accommodate those differences, whereas the generic template would make that difficult.
[screenshot]

@zhangshenghang
Member

thanks, @doveLin0818

If users have to modify the script themselves, I think that is very cumbersome.

Could we add a configuration option so that users select the matching monitoring version when creating a monitor?
For example:
For versions below 1.0, use script A.
For versions between 1.0 and 2.0, use script B.
For versions between 2.0 and 3.0, use script C.
For versions 3.0 and above, use script D.

Of course, this is just an example. Is there a better implementation plan given the actual situation?

@tomsun28 Do you have any suggestions?
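
A minimal sketch of this version-to-script idea, assuming hypothetical template file names (kafka_pre_1_0.yml and friends) that are not part of the project:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class KafkaTemplateSelector {

    // Ordered by upper bound; the first entry whose bound exceeds the version wins.
    // Versions are simplified to a double here; a real implementation would parse
    // full semantic versions.
    private static final Map<Double, String> TEMPLATES = new LinkedHashMap<>();
    static {
        TEMPLATES.put(1.0, "kafka_pre_1_0.yml");   // versions below 1.0 -> script A
        TEMPLATES.put(2.0, "kafka_1_x.yml");       // 1.0 <= v < 2.0     -> script B
        TEMPLATES.put(3.0, "kafka_2_x.yml");       // 2.0 <= v < 3.0     -> script C
    }
    private static final String DEFAULT_TEMPLATE = "kafka_3_x_and_above.yml"; // script D

    public static String selectTemplate(double version) {
        for (Map.Entry<Double, String> entry : TEMPLATES.entrySet()) {
            if (version < entry.getKey()) {
                return entry.getValue();
            }
        }
        return DEFAULT_TEMPLATE;
    }

    public static void main(String[] args) {
        System.out.println(selectTemplate(0.11)); // kafka_pre_1_0.yml
        System.out.println(selectTemplate(2.8));  // kafka_2_x.yml
        System.out.println(selectTemplate(3.9));  // kafka_3_x_and_above.yml
    }
}
```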

@doveLin0818
Contributor Author

> If users have to modify the script themselves, I think that is very cumbersome. Could we add a configuration option so that users select the matching monitoring version when creating a monitor? For example: for versions below 1.0, use script A; between 1.0 and 2.0, script B; between 2.0 and 3.0, script C; 3.0 and above, script D.

hi @zhangshenghang @tomsun28, indeed, it is unfriendly to ask users to modify the scripts themselves, and that is exactly where the scalability of this change helps.
Looking at the official Kafka documentation, there are dozens of versions. Maintaining a separate set of scripts for each version would be complicated and hard to maintain; see https://kafka.apache.org/39/documentation.html
[screenshot]
In fact, we can pre-process the objectName uniformly in the custom JMX template and use the wildcard * to match the MBeans of different versions,
[screenshot]
and then users can enter either kafka.server:type=GroupMetadataManager or kafka.coordinator.group:type=GroupMetadataManager, because our backend matches on "GroupMetadataManager".
Example:
My Kafka version exposes kafka.coordinator.group:type=GroupMetadataManager.
[screenshot]
If I configure kafka.server:type=GroupMetadataManager instead, the indicators are still collected normally.
[screenshots]
By preprocessing the objectName uniformly, we stay compatible with different Kafka versions, because GroupMetadataManager itself is fixed. This is just one example of handling different Kafka versions; I think it is an advantage of customized JMX at present.
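
For reference, a minimal sketch of what this wildcard matching looks like with plain javax.management; the JMX URL and port are placeholders, not values from this PR:

```java
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class GroupMetadataManagerLookup {
    public static void main(String[] args) throws Exception {
        // Placeholder JMX endpoint; replace host/port with the broker's JMX address.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();

            // Wildcard domain: the same pattern matches
            // kafka.server:type=GroupMetadataManager,... on newer brokers and
            // kafka.coordinator.group:type=GroupMetadataManager,... on older ones,
            // because only the type key is fixed.
            ObjectName pattern = new ObjectName("*:type=GroupMetadataManager,*");
            Set<ObjectName> names = conn.queryNames(pattern, null);
            names.forEach(System.out::println);
        }
    }
}
```

This shows only the matching side; the collector then reads the configured attributes from whatever MBeans the pattern returns.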

@tomsun28
Contributor

tomsun28 commented Jan 2, 2025

> By preprocessing the objectName uniformly, we stay compatible with different Kafka versions, because GroupMetadataManager itself is fixed. [...]

Hi 👍 +1, I think this is a good solution for different Kafka versions. For other app types with multiple versions, how about using different template YMLs? We will build a template market later, and users will be able to find the YML for the version they need in the public market, then download and use it. The system will provide a common default YML.

@zhangshenghang
Member

> By preprocessing the objectName uniformly, we stay compatible with different Kafka versions, because GroupMetadataManager itself is fixed. [...]

> We will build a template market later, and users will be able to find the YML for the version they need in the public market. [...]

+1 👍, I will review the code.

@zhangshenghang
Member

@doveLin0818 Hi, after making the changes above, remember to click Resolve.

@doveLin0818
Contributor Author

> @doveLin0818 Hi, after making the changes above, remember to click Resolve.

Hello @zhangshenghang, happy new year! The code has been updated, but I don't see any code review comments, and I can't find the 'Resolve' entry.

@zhangshenghang zhangshenghang merged commit 00076fc into apache:master Jan 5, 2025
3 checks passed
@doveLin0818 doveLin0818 deleted the feature_JmxCustomization branch January 5, 2025 10:01