KAFKA-17616: Remove KafkaServer #18384

Merged 2 commits on Jan 6, 2025
10 changes: 3 additions & 7 deletions core/src/main/scala/kafka/Kafka.scala
@@ -19,7 +19,7 @@ package kafka

import java.util.Properties
import joptsimple.OptionParser
-import kafka.server.{KafkaConfig, KafkaRaftServer, KafkaServer, Server}
+import kafka.server.{KafkaConfig, KafkaRaftServer, Server}
import kafka.utils.Implicits._
import kafka.utils.Logging
import org.apache.kafka.common.utils.{Exit, Java, LoggingSignalHandler, OperatingSystem, Time, Utils}
@@ -64,11 +64,7 @@ object Kafka extends Logging {
private def buildServer(props: Properties): Server = {
val config = KafkaConfig.fromProps(props, doLog = false)
if (config.requiresZookeeper) {
-      new KafkaServer(
-        config,
-        Time.SYSTEM,
-        threadNamePrefix = None
-      )
+      throw new RuntimeException("ZooKeeper is not supported")
Member: Can we not simply remove this class? It's not a public class and I don't think anything references it besides the relevant shell script.

Member: If KafkaServer still exists, we can't remove many unused classes/configs from the code base, for example the ZK configs and ZkMetadataCache. Do we have any use cases for KafkaServer in 4.0?

Member: I actually meant something else: I was asking whether we need to keep kafka.Kafka. The answer is yes, since it is still used for KRaft. But looking at this in more detail, this exception message doesn't make sense in a world where ZK is not supported, since it will be thrown when process.roles is empty.

We should simply remove this conditional code and leave it to KafkaConfig to throw the appropriate exception if processRoles is empty.

Member: Sorry for misunderstanding your comment.

> We should simply remove this conditional code and leave it to KafkaConfig to throw the appropriate exception if processRoles is empty.

Makes sense.

} else {
new KafkaRaftServer(
config,
@@ -105,7 +101,7 @@ object Kafka extends Logging {
try server.startup()
catch {
case e: Throwable =>
-        // KafkaServer.startup() calls shutdown() in case of exceptions, so we invoke `exit` to set the status code
+        // KafkaBroker.startup() calls shutdown() in case of exceptions, so we invoke `exit` to set the status code
fatal("Exiting Kafka due to fatal exception during startup.", e)
Exit.exit(1)
}
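To illustrate the follow-up the reviewers agree on (dropping the conditional and letting KafkaConfig reject an empty process.roles), here is a minimal sketch of what buildServer could collapse to. It is not part of this PR, and the KafkaRaftServer constructor arguments are assumed to match the existing call that the diff truncates above.

```scala
import java.util.Properties

import kafka.server.{KafkaConfig, KafkaRaftServer, Server}
import org.apache.kafka.common.utils.Time

object BuildServerSketch {
  // Hypothetical follow-up sketch, not the code merged in this PR:
  // the requiresZookeeper branch disappears entirely, and KafkaConfig.fromProps
  // is assumed to throw its own validation error when process.roles is empty.
  def buildServer(props: Properties): Server = {
    val config = KafkaConfig.fromProps(props, doLog = false)
    new KafkaRaftServer(config, Time.SYSTEM) // arguments assumed from the existing call site
  }
}
```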
4 changes: 2 additions & 2 deletions core/src/main/scala/kafka/server/BrokerServer.scala
@@ -424,15 +424,15 @@ class BrokerServer(
val fetchSessionCacheShards = (0 until NumFetchSessionCacheShards)
.map(shardNum => new FetchSessionCacheShard(
config.maxIncrementalFetchSessionCacheSlots / NumFetchSessionCacheShards,
-        KafkaServer.MIN_INCREMENTAL_FETCH_SESSION_EVICTION_MS,
+        KafkaBroker.MIN_INCREMENTAL_FETCH_SESSION_EVICTION_MS,
sessionIdRange,
shardNum
))
val fetchManager = new FetchManager(Time.SYSTEM, new FetchSessionCache(fetchSessionCacheShards))

val shareFetchSessionCache : ShareSessionCache = new ShareSessionCache(
config.shareGroupConfig.shareGroupMaxGroups * config.groupCoordinatorConfig.shareGroupMaxSize,
-      KafkaServer.MIN_INCREMENTAL_FETCH_SESSION_EVICTION_MS)
+      KafkaBroker.MIN_INCREMENTAL_FETCH_SESSION_EVICTION_MS)

sharePartitionManager = new SharePartitionManager(
replicaManager,
12 changes: 1 addition & 11 deletions core/src/main/scala/kafka/server/DynamicBrokerConfig.scala
@@ -699,20 +699,11 @@ class DynamicLogConfig(logManager: LogManager, server: KafkaBroker) extends Brok
}

override def reconfigure(oldConfig: KafkaConfig, newConfig: KafkaConfig): Unit = {
-    val originalLogConfig = logManager.currentDefaultConfig
-    val originalUncleanLeaderElectionEnable = originalLogConfig.uncleanLeaderElectionEnable
val newBrokerDefaults = new util.HashMap[String, Object](newConfig.extractLogConfigMap)

logManager.reconfigureDefaultLogConfig(new LogConfig(newBrokerDefaults))

updateLogsConfig(newBrokerDefaults.asScala)

-    if (logManager.currentDefaultConfig.uncleanLeaderElectionEnable && !originalUncleanLeaderElectionEnable) {
-      server match {
-        case kafkaServer: KafkaServer => kafkaServer.kafkaController.enableDefaultUncleanLeaderElection()
-        case _ =>
-      }
-    }
}
}

@@ -1068,8 +1059,7 @@ class DynamicListenerConfig(server: KafkaBroker) extends BrokerReconfigurable wi
listenersToMap(newConfig.effectiveAdvertisedBrokerListeners))) {
verifyListenerRegistrationAlterationSupported()
server match {
-        case kafkaServer: KafkaServer => kafkaServer.kafkaController.updateBrokerInfo(kafkaServer.createBrokerInfo)
-        case _ => throw new RuntimeException("Unable to handle non-kafkaServer")
+        case _ => throw new RuntimeException("Unable to handle reconfigure")
Member: If we always throw this exception, we probably don't need to do verifyListenerRegistrationAlterationSupported before that and we may be able to delete that method if it's not used anywhere else.

Member: Also, we probably should include more context: it looks like we don't allow dynamic reconfiguration of listener registrations.

@cmccabe @jsancio Is this documented as part of the ZK to KRaft migration?

Member: We should remove SocketServerConfigs.ADVERTISED_LISTENERS_CONFIG from ReconfigurableConfigs as well:

SocketServerConfigs.ADVERTISED_LISTENERS_CONFIG,

That would prevent users from configuring advertised listeners dynamically.

Contributor (@m1a2st, Jan 5, 2025): I opened a Jira to check all dynamic configs that can't be dynamically configured in KRaft, and only found that ADVERTISED_LISTENERS_CONFIG can't. If we address it in this PR, I will close that Jira and #18390.

Member (Author): There's a bunch of other ZooKeeper-related logic in this class, so I was planning to address them together in a follow-up issue. Deleting KafkaServer enables us to start removing other classes that depend on this one.

Member:

> There's a bunch of other ZooKeeper-related logic in this class, so I was planning to address them together in a follow-up issue. Deleting KafkaServer enables us to start removing other classes that depend on this one.

I'm OK with merging this and then addressing the comments in a follow-up, because many cleanups are blocked by KafkaServer.

Member: I'm ok with that, but let's file a JIRA so we don't lose track.

Member: I care particularly about the error messages and such things since they are not easy to find afterwards.

Member (Author): I opened https://issues.apache.org/jira/browse/KAFKA-18405 with details about this issue.

}
}
}
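Regarding the request above for a more descriptive failure when listener registrations cannot be reconfigured, a minimal sketch of the kind of message the reviewers ask for is below. The wording here is illustrative only, not the merged code; the actual follow-up is tracked in KAFKA-18405.

```scala
object ListenerReconfigurationSketch {
  // Hypothetical sketch only: fail with a message that names the unsupported operation
  // instead of the generic "Unable to handle reconfigure".
  def rejectListenerRegistrationChange(): Nothing =
    throw new RuntimeException(
      "Dynamic reconfiguration of listener registrations is not supported in KRaft mode")
}
```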
2 changes: 2 additions & 0 deletions core/src/main/scala/kafka/server/KafkaBroker.scala
@@ -69,6 +69,8 @@ object KafkaBroker {
* you do change it, be sure to make it match that regex or the system tests will fail.
*/
val STARTED_MESSAGE = "Kafka Server started"

+  val MIN_INCREMENTAL_FETCH_SESSION_EVICTION_MS: Long = 120000
Member: It's a bit odd that we're not following the Scala convention here. Since we're rewriting this stuff in Java anyway, maybe it's ok.

}

trait KafkaBroker extends Logging {
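On the naming-convention note above: Scala constants are conventionally written in upper camel case rather than SCREAMING_SNAKE_CASE, so a convention-following version of the new constant might look like the sketch below. The merged code keeps the Java-style name, which lines up with the planned rewrite of this code in Java.

```scala
object KafkaBrokerNamingSketch {
  // Illustrative only: the same constant with conventional Scala constant naming.
  val MinIncrementalFetchSessionEvictionMs: Long = 120000
}
```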