Hi, we have a website that fetches bus_travels from multiple bus operators. For each route there may be multiple bus operators that have to be fetched.
Right now, I have the condition inside each BusOperatorWorker:
Superworker.define(:BusGatewaySuperWorker, :bus_travel_id, :route_bus_operators_internal_names) do
  # mark work as started.
  StartWorker :bus_travel_id do
    # concurrent
    parallel do
      BusOperatorNameNumberOneWorker(:bus_travel_id, :route_bus_operators_internal_names)
      BusOperatorNameNumberTwoWorker(:bus_travel_id, :route_bus_operators_internal_names)
      BusOperatorNameNumberThreeWorker(:bus_travel_id, :route_bus_operators_internal_names)
      # . . .
      BusOperatorNameNumberThirtyWorker(:bus_travel_id, :route_bus_operators_internal_names)
    end
    EndWorker :bus_travel_id
  end
  # mark work as done.
end
And this is a worker for one bus operator:

class BusOperatorNameNumberOneWorker
  include Sidekiq::Worker
  sidekiq_options queue: :crawler, retry: false

  def perform(bus_travel_id, route_bus_operators_internal_names)
    # Bail out early unless this operator actually serves the route.
    return unless route_bus_operators_internal_names.include?('name_number_one')
    # . . . here fetch from the bus operator's servers, since this worker matches the operator.
  end
end
This way, I call perform_async for several bus_operators and it works.
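For context, each search kicks the superworker off roughly like this (the values are illustrative and this assumes the perform_async entry point that the superworker gem generates):

# Illustrative only: how one search enqueues the superworker for a route
# served by three operators out of the thirty.
BusGatewaySuperWorker.perform_async(bus_travel.id, ['bus_one', 'bus_two', 'bus_three'])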
But right now I have 30 workers inside the parallel block. Each time a search is performed, all 30 workers are launched and evaluated (at the very least to check whether the bus operator matches; if not, the worker just returns).
So imagine we have 400 people making searches: that's 30 * 400 = 12,000 jobs, and my Sidekiq is blowing up. We do cache the search results, but when we want to clear the cache, Sidekiq goes crazy.
This is the use case: on route A, BusOp1, BusOp2 and BusOp3 out of the 30 bus operators can serve the route, so we call the superworker with route_bus_operators_internal_names = ['bus_one', 'bus_two', 'bus_three']. All 30 workers are launched in parallel; the first 3 actually perform the search and the other 27 just return because they don't match the operator internal name.
The problem is that even though workers 4 to 30 don't correspond to any name in the route_bus_operators_internal_names array, they are still launched just to evaluate that.
I want to evaluate this inside the superworker, so that only the matching workers run in parallel, without having to launch them all.
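One alternative I'm considering (a minimal sketch, not the superworker DSL, and assuming a hand-maintained map from internal operator name to worker class; the names here are placeholders) is to replace the parallel block with a plain dispatcher worker that only enqueues the operators actually on the route:

class BusGatewayDispatcherWorker
  include Sidekiq::Worker
  sidekiq_options queue: :crawler, retry: false

  # Hypothetical map from operator internal name to worker class;
  # one entry per operator, 30 in total.
  OPERATOR_WORKERS = {
    'name_number_one'    => BusOperatorNameNumberOneWorker,
    'name_number_two'    => BusOperatorNameNumberTwoWorker,
    'name_number_three'  => BusOperatorNameNumberThreeWorker,
    # . . .
    'name_number_thirty' => BusOperatorNameNumberThirtyWorker
  }.freeze

  def perform(bus_travel_id, route_bus_operators_internal_names)
    # Enqueue only the matching operators, so the other 27 workers
    # are never launched at all.
    route_bus_operators_internal_names.each do |internal_name|
      worker_class = OPERATOR_WORKERS[internal_name]
      next unless worker_class
      worker_class.perform_async(bus_travel_id, route_bus_operators_internal_names)
    end
  end
end

The StartWorker/EndWorker bookkeeping would then have to be handled some other way, but the per-search job count drops from 30 to only the operators that serve the route. Is there a way to get this kind of conditional fan-out inside the superworker definition itself?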