Is your feature request related to a problem? Please describe.
The current list_models() API only exposes a limit parameter and has no true pagination support. This makes it impossible to systematically discover models beyond the initial limit. For example, when fetching the most downloaded models:
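A minimal sketch of the call in question, assuming the standard HfApi client:

```python
from huggingface_hub import HfApi

hf = HfApi()

# Fetch the 100 most-downloaded models; repeated calls always return the same top 100.
models = list(hf.list_models(sort="downloads", direction=-1, limit=100))
```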
This will always return the same top 100 models unless a new model overtakes one of the current top 100 by downloads. There is no way to get models 101-200 other than raising the limit to 200 and discarding the first 100 records. This is problematic for services that need to:
- Discover new models systematically
- Process models in smaller batches
- Index or monitor the full model ecosystem
Describe the solution you'd like
Add proper pagination support to the API by either:
Adding an offset parameter:
```python
models = hf.list_models(
    filter="text-generation",
    sort="downloads",
    limit=100,
    offset=100,  # get the next 100 models
)
```
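With such a parameter, systematic discovery becomes a simple paged loop. A hypothetical sketch (the offset argument does not exist today, and index_batch() is a placeholder for downstream processing):

```python
from huggingface_hub import HfApi

hf = HfApi()
offset = 0
while True:
    # Hypothetical call: offset is the proposed parameter, not a real argument today.
    page = list(hf.list_models(filter="text-generation", sort="downloads", limit=100, offset=offset))
    if not page:
        break  # listing exhausted
    index_batch(page)  # placeholder for indexing/monitoring logic
    offset += 100
```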
Or exposing the internal cursor-based pagination that's already used by paginate():
```python
response = hf.list_models(
    filter="text-generation",
    limit=100,
    cursor="next_page_token",  # from the previous response
)
next_cursor = response.next_cursor
```
Describe alternatives you've considered
Current workarounds we've tried:
- Fetching very large batches (1000+ models) and filtering locally (see the sketch below)
- Using different sort criteria to try to surface different models
- Using the search parameter with different queries
None of these provide a reliable way to systematically discover all models and ensure we are getting different models with each call.
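For reference, a rough sketch of the over-fetching workaround, assuming the standard HfApi client (the slice bounds are illustrative):

```python
from huggingface_hub import HfApi

hf = HfApi()

# Over-fetch a large window, then slice out the "page" we actually want locally.
models = list(hf.list_models(filter="text-generation", sort="downloads", limit=1000))
page_101_to_200 = models[100:200]  # the first 100 records are fetched only to be discarded
```

This works for small offsets but scales poorly: reaching models 10,001-10,100 would mean downloading over 10,000 records per call.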
Additional context
Looking at the source code, the API already uses internal pagination via paginate():
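Roughly, the listing helpers follow this pattern (a paraphrase from memory rather than an exact quote of hf_api.py; paginate() is the internal helper that follows the HTTP Link header across pages):

```python
from itertools import islice

from huggingface_hub.utils import paginate  # assumed import path for the internal helper

# Paraphrased internal pattern with illustrative values:
path = "https://huggingface.co/api/models"
params = {"filter": "text-generation", "sort": "downloads"}
limit = 100

items = paginate(path, params=params, headers={})  # yields models across all pages
models = list(islice(items, limit))  # the user-facing limit is applied on top of the stream
```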
Exposing this functionality would align with common API practices and enable better tooling around the Hub's model ecosystem.