Adjust GTP configuration to improve performance
- Set the number of search threads to 16.
- Set the max batch size to 8.
- Use two neural network server threads, one for the GPU and one for the Neural Engine.
ChinChangYang committed Nov 18, 2023
1 parent b61999f commit a444e21
Showing 1 changed file with 5 additions and 5 deletions.
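The three values are consistent with the guidance in the config's own comments (nnMaxBatchSize roughly equal to numSearchThreads, split across the NN server threads). A quick sanity check of that arithmetic, using the values set by this commit:

```python
# Values set by this commit in default_gtp.cfg.
num_search_threads = 16
num_nn_server_threads_per_model = 2  # one for the GPU, one for the Neural Engine
nn_max_batch_size = 8

# Per the config comment, nnMaxBatchSize defaults to roughly
# numSearchThreads; with two NN server threads splitting the work,
# each server thread batches up to half the search threads.
assert nn_max_batch_size * num_nn_server_threads_per_model == num_search_threads
print("batch capacity per NN server thread:", nn_max_batch_size)
```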
ios/KataGo iOS/Resources/default_gtp.cfg (5 additions, 5 deletions)

@@ -217,7 +217,7 @@ maxTimePondering = 60 # Maximum time to ponder, in seconds. Comment out to make
 lagBuffer = 1.0
 
 # Number of threads to use in search
-numSearchThreads = 2
+numSearchThreads = 16
 
 # Play a little faster if the opponent is passing, for friendliness
 searchFactorAfterOnePass = 0.50
@@ -232,7 +232,7 @@ searchFactorWhenWinningThreshold = 0.95
 # The default value here is roughly equal to numSearchThreads, but you can specify it manually
 # if you are running out of memory, or if you are using multiple GPUs that expect to split
 # up the work.
-# nnMaxBatchSize = <integer>
+nnMaxBatchSize = 8
 
 # Cache up to (2 ** this) many neural net evaluations in case of transpositions in the tree.
 # Uncomment and edit to change if you want to adjust a major component of KataGo's RAM usage.
@@ -251,7 +251,7 @@ searchFactorWhenWinningThreshold = 0.95
 # Metal backend runs the default GPU 0.
 # CoreML backend runs at another two threads.
 # So, if you want to use Metal + CoreML, you should set numNNServerThreadsPerModel to 3.
-numNNServerThreadsPerModel = 1
+numNNServerThreadsPerModel = 2
 
 
 # TENSORRT GPU settings--------------------------------------
@@ -347,8 +347,8 @@ coremlDeviceToUse = 100 # Neural Engine
 
 # IF USING TWO MODEL: Uncomment these two lines
 # (AND also set numNNServerThreadsPerModel = 2 above)
-# coremlDeviceToUseThread0 = 0 # GPU
-# coremlDeviceToUseThread1 = 100 # Neural Engine
+coremlDeviceToUseThread0 = 0 # GPU
+coremlDeviceToUseThread1 = 100 # Neural Engine
 
 # IF USING THREE MODEL: Uncomment these three lines
 # (AND also set numNNServerThreadsPerModel = 3 above)
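For quick reference, the settings touched by this commit end up as the following fragment of default_gtp.cfg (values taken directly from the diff above):

```
numSearchThreads = 16
nnMaxBatchSize = 8
numNNServerThreadsPerModel = 2
coremlDeviceToUseThread0 = 0     # GPU
coremlDeviceToUseThread1 = 100   # Neural Engine
```

On desktop builds of KataGo, the `katago benchmark` command can be used to compare thread settings empirically before committing to a value.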
