Concurrency #22
Hello! Thanks for taking the time to look into this project and provide valuable feedback :) The "solution" I've had for this so far is to run multiple FastCGI instances (workers):
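For illustration (this is a sketch, not the project's documented setup), a static pool of instances could be kept alive with supervisord along these lines; the `daemon.php` path and `--port` option are hypothetical placeholders for however your application starts the daemon:

```ini
; Illustrative supervisord config: a fixed pool of four daemon instances
; on consecutive ports (9000-9003). The command path is a placeholder.
[program:fastcgi-daemon]
command=php /var/www/app/daemon.php --port=90%(process_num)02d
process_name=%(program_name)s_%(process_num)02d
numprocs=4
autostart=true
autorestart=true
```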
This prevents blocking operations from slowing down the server to a degree, but obviously you recognise it as just pushing the problem down the chain rather than actually solving it entirely! In a way, this isn't actually too different to how PHP works anyway (there is a maximum number of worker threads and PHP instances on a server). The difference is that the resource overhead is higher, as without an adaptive process manager (such as Apache's mod_fastcgi) all the instances you want are running all the time.

To actually answer your question: there is no accommodation for asynchronous processing at the moment. I've toyed with the idea of making the daemon accept a response promise, but this wouldn't be much use without a way to expose the daemon's event loop to the application (otherwise the application would never be able to resolve a promise). To this, I see two solutions:

1. Couple the daemon directly to an existing event loop implementation, such as ReactPHP or Icicle.
2. Build on an industry-wide event loop abstraction that the daemon and the application can share.
I've tried to avoid going down the first route (as all of these projects are quite young), but just typing this out has given me an idea. Maybe there's a way to create a ReactPHP/Icicle driver as a separate package, so that users could use async functionality without the project directly coupling to either implementation? I'll be having a look at this over the next few weeks :)

The second option is the end goal, and I've been speaking to people in the FIG about this for a while. A few months ago I approached Christopher Pitt, and we have now formed a working group for an event loop PSR. The async-interop organization is the new home for our discussions (in the issues) and development (although it's pretty bare at the moment!).

TL;DR: Nothing at the moment, but you can just spawn multiple instances. In the next version I'm going to build support for a separate ReactPHP driver package, and hopefully one day there will be an industry abstraction for this.

Thanks again for your thoughts and time, I'll keep this issue updated with any developments :)
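To make the decoupling idea concrete, a separate driver package could target a small interface along the lines of the sketch below. All names here are hypothetical illustrations, not the project's actual API:

```php
<?php
// Hypothetical seam: the daemon depends only on this interface, and a
// separate package (e.g. a ReactPHP driver) provides the implementation.
interface EventLoopDriverInterface
{
    /** Schedule a callback to run on the next tick of the loop. */
    public function defer(callable $callback): void;

    /** Invoke $onReadable when $stream has data, without blocking. */
    public function onReadable($stream, callable $onReadable): void;

    /** Run the loop until it is stopped. */
    public function run(): void;
}
```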
Another thing that you could consider is exposing some of the inner workings of this application better, so that I can hook it into an event loop myself. I have a very minimal stream-select based event loop (hosted here); what I would need from this project in order to make this work (and I'm fairly sure it's the same for React and Icicle) is:
Just those three things should effectively be enough to turn your synchronous server into an asynchronous one, without having to supply adapters just yet or create a hard dependency on one implementation.
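For illustration, the kind of `stream_select()` integration being asked for might look roughly like this. The `getSocket()`, `readRequest()` and `writeResponse()` methods are hypothetical names for what the daemon would need to expose:

```php
<?php
// Minimal sketch of a stream_select() loop driving the daemon.
// $daemon->getSocket(), ->readRequest() and ->writeResponse() are
// hypothetical methods the project would need to expose.
function runLoop($daemon, $application): void
{
    $listener = $daemon->getSocket(); // underlying server stream resource

    while (true) {
        $read = [$listener];
        $write = $except = [];

        // Wait (up to 1s) for the listening socket to become readable.
        if (stream_select($read, $write, $except, 1) > 0) {
            $connection = stream_socket_accept($listener);
            $request    = $daemon->readRequest($connection);  // parse FastCGI records
            $response   = $application->handle($request);     // application logic
            $daemon->writeResponse($connection, $response);   // write records back
        }

        // Timers, pending promises and other event sources could be
        // serviced here, interleaved with socket activity.
    }
}
```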
Hi there! The idea behind this project is really great! Unfortunately, it will be of very limited use for serious, high-traffic projects without a proper process manager and either a multi-threaded or multi-process model.
Hi @brstgt,

A process manager isn't within the scope of this project, as I don't see any benefit in one that I would write over a proper process manager such as supervisord (which is what I recommend people use). It's quite easy to have a multi-process model too: just launch more processes and use NGINX to load balance between them.

Where this project could benefit is from a more accessible asynchronous core, potentially using the event loop standard from the async-interop group. I'd probably look to make a change of this scale in a new major version, and I'd probably aim to slim down the dependencies (as mentioned in a different issue) at the same time.

I could be convinced otherwise on all of the above, but that's my current position. And those are "the plans".

At the moment I'm not working in web development (I'm currently working with embedded software), so my free time for open source has been directed away from PHP projects (though that may change). I'll still maintain this project for any bug or security fixes that are required, but I'll be looking to the community for significant new features (and I'm more than happy to discuss and assist with the development of PRs).

Thanks very much for your interest in this project and I hope this is of some assistance!

Cheers,
Andrew
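As a rough illustration of that multi-process setup, the NGINX configuration below round-robins across four daemon instances via an upstream block. The ports and paths are placeholders matching the supervisord sketch earlier in the thread, not anything the project mandates:

```nginx
# Illustrative nginx config: load balance FastCGI requests across
# four daemon instances (ports must match those the daemons bind).
upstream fastcgi_workers {
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
    server 127.0.0.1:9002;
    server 127.0.0.1:9003;
}

server {
    listen 80;

    location / {
        include fastcgi_params;
        fastcgi_pass fastcgi_workers;
    }
}
```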
Thanks for that fast reply.

supervisord is of course a great piece of software for managing processes, but it cannot do resource management. NGINX does not do proper resource management either; it simply does round robin. A configuration for, say, 200 worker processes would be "not so nice" in both supervisord and NGINX.

What I would personally require before giving it a try is a kind of pre-forked model which not only controls child processes but knows which worker is free, so that a request is assigned to a free worker rather than just any worker.

Of course, async operations are a great deal, but they will not remove that requirement, because there can be CPU-bound work, like image processing, that cannot be done asynchronously and must be processed in parallel.

Cheers, Ben
Btw, I could imagine that using pthreads could make this relatively easy. Forking is more complicated and requires IPC. pthreads already supports worker threads that can handle a task queue, so balancing by "least busy worker" would be a walk in the park. Only dealing with (shared) resources in threaded environments (especially in PHP) needs some special attention.
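As a rough sketch of that idea, assuming the pthreads extension on a ZTS build of PHP: `Worker::stack()` and `Worker::getStacked()` are real pthreads methods, while the task class and `handleRequest()` below are made-up names for illustration.

```php
<?php
// Sketch: "least busy worker" dispatch with the pthreads extension
// (requires a ZTS build of PHP with pthreads installed).

// Hypothetical application entry point, stubbed for the sketch.
function handleRequest(string $payload): void
{
    // ... run the application for one request ...
}

class RequestTask extends Threaded
{
    private $payload;

    public function __construct(string $payload)
    {
        $this->payload = $payload;
    }

    public function run()
    {
        // Executes inside a worker thread; CPU-bound work runs in parallel.
        handleRequest($this->payload);
    }
}

// Start a fixed pool of worker threads.
$workers = [];
for ($i = 0; $i < 4; $i++) {
    $workers[$i] = new Worker();
    $workers[$i]->start();
}

// Hand each request to the worker with the fewest stacked tasks,
// rather than dispatching round robin.
function dispatch(array $workers, string $payload): void
{
    $leastBusy = $workers[0];
    foreach ($workers as $worker) {
        if ($worker->getStacked() < $leastBusy->getStacked()) {
            $leastBusy = $worker;
        }
    }
    $leastBusy->stack(new RequestTask($payload));
}
```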
Hi, I'm following this very interesting discussion because we'd like to combine our work @prooph with async ideas and ways to keep the read/write models alive between requests. @AndrewCarterUK your "plans" sound good. I hope you find your way back to PHP ;)

@brstgt Do you know appserver.io? I came across the appserver project recently and I'm wondering if it provides the features you listed above.
Take a look at this one: https://github.com/amphp/aerys We migrated from FPM to Aerys and are super happy! If you have a proper request abstraction, HTTP can definitely be favoured over FastCGI.
Hi there!
I just did my first FastCGI implementation. It was fairly simple to get it up and running. The big thing I'm thinking about now, though, is: how is this actually going to improve performance? Given that the request-response sequence currently happens in one synchronous process, this effectively means that if a user triggers a request that takes more than a second (due to a slow MySQL query, say), every other user has to wait until that request has finished.
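To make the concern concrete, the synchronous model described behaves roughly like the loop below. The helper functions are hypothetical stand-ins for FastCGI record handling, not the project's API:

```php
<?php
// Sketch of the synchronous model: one process, one request at a time.
// The helpers below are hypothetical stubs standing in for real logic.
function parseFastcgiRequest($connection): string { /* read + decode records */ return ''; }
function sendFastcgiResponse($connection, string $body): void { /* encode + write records */ }
function runApplication(string $request): string { /* application logic */ return 'Hello'; }

$server = stream_socket_server('tcp://127.0.0.1:9000', $errno, $errstr);

while ($connection = stream_socket_accept($server, -1)) {
    $request  = parseFastcgiRequest($connection);
    $response = runApplication($request); // a one-second MySQL query here...
    sendFastcgiResponse($connection, $response);
    fclose($connection); // ...means every queued client waits that second
}
```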
Is there an internal API that will allow me to have the daemon parse incoming requests and send responses back asynchronously? If so, I'll be able to integrate an event loop...