Build FSM runner #1

Open
gauravgardi opened this issue Jan 15, 2017 · 26 comments

@gauravgardi
Member

Look at smach - http://wiki.ros.org/smach for this.
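For reference, here is a minimal smach skeleton of what the runner could look like. This is only a sketch; the state names, outcomes, and transitions are placeholders, not the agreed architecture.

```python
#!/usr/bin/env python
# Minimal smach skeleton -- a sketch only; state names, outcomes and
# transitions are placeholders, not our actual architecture.
import smach


class FindScan(smach.State):
    def __init__(self):
        # Declare the outcomes this state can return.
        smach.State.__init__(self, outcomes=['bot_found', 'nothing_found'])

    def execute(self, userdata):
        # Detection / scanning logic would go here.
        return 'bot_found'


class Strategy(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['done'])

    def execute(self, userdata):
        # Decide on and perform the action on the chosen bot.
        return 'done'


def main():
    sm = smach.StateMachine(outcomes=['mission_complete'])
    with sm:
        smach.StateMachine.add('FIND_SCAN', FindScan(),
                               transitions={'bot_found': 'STRATEGY',
                                            'nothing_found': 'FIND_SCAN'})
        smach.StateMachine.add('STRATEGY', Strategy(),
                               transitions={'done': 'mission_complete'})
    outcome = sm.execute()
    print(outcome)


if __name__ == '__main__':
    main()
```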

@sam17
Member

sam17 commented Jan 17, 2017

This seems perfect for us. Someone should write a Python file for our architecture. @ManashRaja can someone from software be assigned to this?

@gauravgardi
Member Author

@ash-anand @amitpathak09 check the link above for help with implementing the FSM.

@sam17
Member

sam17 commented Apr 11, 2017

@ash-anand Copy-paste all the discussion, the wiki, and the diagram here.

@ash-anand
Member

Snippets from conversation on Slack.
@sam17 :

  1. Why is identifying the bot to attack a different state? Why can't we do it in parallel all the time?

  2. I think your states are too broad. Each state can have multiple states within it. Though that depends on other modules.

  3. The Find/Scan state may need to have multiple states within it, depending on localisation success or failure.

  4. We might need an additional state for increasing K, because it is not guaranteed that K will increase just while scanning. Some action might be needed to increase K.

  5. You claim to have a state with K high for multiple bots. That is really, really rare.

@ash-anand
Member

1. Identifying the bot is a separate state because there are currently two ways we plan to do it: (a) simple circle detection and (b) YOLO, which is learning-based. We don't have enough computational power to run YOLO all the time, so that state will also decide which one to use based on FPS or some other criterion.
2. States describe what should be done; how it is done is decided by the modules.
3. The Find/Scan state will have more modes under it. Will think it through.
4. K will increase/decrease using a Kalman filter. The longer a bot is out of view, the lower its probability gets, following a Gaussian distribution.
5. We can put a threshold where, say, 10 of 20 bots must have probability greater than K instead of all bots (see the sketch after this list).
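A rough sketch of the bookkeeping described in points 4 and 5, assuming K is a per-bot confidence that decays with time out of view; the decay rate, threshold, and the 10-of-20 criterion below are placeholder numbers, not decided values.

```python
import math

# Sketch of points 4-5: per-bot confidence K decays (Gaussian in time out of
# view), and a transition fires when enough bots are above a threshold.
# All numbers below are placeholders, not decided values.
DECAY_SIGMA = 5.0      # seconds; how quickly confidence fades while out of view
K_THRESHOLD = 0.6      # minimum confidence to count a bot as "known"
MIN_KNOWN_BOTS = 10    # the "10/20 bots" criterion from point 5


def decayed_k(k_last_seen, seconds_out_of_view):
    """Confidence falls off with a Gaussian in the time since the bot was last seen."""
    return k_last_seen * math.exp(-0.5 * (seconds_out_of_view / DECAY_SIGMA) ** 2)


def enough_bots_known(bots):
    """bots: iterable of (k_last_seen, seconds_out_of_view) pairs."""
    known = sum(1 for k, dt in bots if decayed_k(k, dt) >= K_THRESHOLD)
    return known >= MIN_KNOWN_BOTS
```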

@ash-anand
Member

@sam17 :

  1. Doesn't YOLO/circle detection need to run all the time anyway? So why is it a state with a transition? What is YOLO? Is there a description of it? I recently read that the Pi can run DL detection in under 1 s.
  2. True
  3. Yes, but we need something to actively increase K, no? Who handles that?
  4. 10/20 bots? Very, very improbable.

@ash-anand
Member

  1. I misunderstood. YOLO can't be run all the time; it's very computationally heavy. Pi cams detecting things don't work on ML. They do template matching, so bots won't be detected if the angle or size changes.
  2. K will increase when the quad sees the bot. The location will be stored as a Normal distribution with increasing variance (1/K); if the quad sees it again, we will reduce the variance of the distribution to the harmonic mean, thus increasing K (see the sketch after this list).
  3. We can pick some other probability criterion then.
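To make point 2 concrete, here is a sketch under the assumption that "reduce the variance to the harmonic mean" means the standard product-of-Gaussians (1-D Kalman) measurement update; the exact formulas are still to be agreed.

```python
# Sketch of point 2: each bot's location is a Normal distribution whose variance
# (1/K) grows while the bot is out of view and shrinks when it is re-detected.
# The update below is the standard 1-D Kalman / product-of-Gaussians fusion,
# which is one reading of "harmonic mean of the variances".

def predict(mean, var, process_noise):
    """While the bot is out of view, the variance (1/K) only grows."""
    return mean, var + process_noise


def update(mean, var, meas_mean, meas_var):
    """Fuse the stored Gaussian with a fresh detection; variance shrinks, so K rises."""
    new_var = 1.0 / (1.0 / var + 1.0 / meas_var)
    new_mean = new_var * (mean / var + meas_mean / meas_var)
    return new_mean, new_var
```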

@sam17
Member

sam17 commented Apr 15, 2017

  1. Why not? The Pi does not need to do the IP itself. It only needs to stream images and get the results back, but yes, the network needs to be fast for that.
  2. But what about following one bot? Shouldn't we have the provision to shift to a greedy mechanism here?

@ash-anand
Member

  1. We can do that, but last year off-board computation didn't go very well, I guess. I was thinking of getting an NVIDIA Jetson development board, which should hopefully support running YOLO all the time.

  2. That again comes down to the specification for handling the event. Also, if a bot has high probability and is feasible to attack, the quad will follow it anyway. We can add a greedy algorithm as well, but greedy algorithms have the downside of not always converging to the optimum.

@gauravgardi
Member Author

@ash-anand I did not understand why the identify-bot state would decide whether YOLO is to be used or not. That state is just supposed to point out which bot has to be attacked based on the MAV's and ground bots' pose and twist, right?

@ash-anand
Member

ash-anand commented Apr 16, 2017

@gauravgardi to know anything about the ground bots and point out the bots to take action on, first we need to detect them and get details about their pose, twist, etc., as you said.
For that we need detection code (circle detection like we have now, or YOLO). Since YOLO is computationally very expensive, we can't just let it run in a separate thread and continuously publish data to a node.
This can be handled in two ways:

  • Get a more powerful development board that supports CUDA and has a multi-core processor.
  • Run YOLO only when no bots are visible to the circle-detection code.

Assuming we go with the second option, it should be the job of the Find state to do the switching when required (a sketch follows below).
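A sketch of how the Find state could do that switching under the second option; detect_circles and detect_yolo are hypothetical stand-ins for whatever detection interfaces we end up with, not existing nodes.

```python
import smach

# Sketch of option 2: the Find state falls back to YOLO only when the cheap
# circle detector sees nothing.  detect_circles / detect_yolo are hypothetical
# callables, not existing nodes.
class FindScan(smach.State):
    def __init__(self, detect_circles, detect_yolo):
        smach.State.__init__(self, outcomes=['bots_visible', 'nothing_found'])
        self.detect_circles = detect_circles
        self.detect_yolo = detect_yolo

    def execute(self, userdata):
        bots = self.detect_circles()      # cheap detector, always run first
        if not bots:
            bots = self.detect_yolo()     # expensive detector, only as a fallback
        return 'bots_visible' if bots else 'nothing_found'
```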

@gauravgardi
Member Author

@ash-anand so it should be the job of the Find/Scan state, or a separate state if required, and not the job of the identify-bot state. The identify-bot state then only identifies which bot to attack based on the MAV's and ground bots' pose and twist. So why is it a separate state and not part of the Strategy state?

@ash-anand
Member

@gauravgardi identifying bots shouldn't be done when the quad is going to attack; it should be done before. Identify-bot is not a state of its own: identifying bots comes under the Find/Scan state only.
Bot prediction is a totally different state, which is responsible for choosing the best bot to attack.
I made it separate from Strategy because of the discontinuity between these two tasks.
Bot prediction can be very expensive on the processor if done in depth. Also, if you consider the interrupts received from Object Detection, you can see the advantage of keeping the Bot Prediction state separate from Strategy. Strategy is already a very complicated state with a lot of physical activity from the quad.
As written in the wiki, states are "re-run" once an interrupt is received (a sketch of this is below).
Doing bot prediction again and again when we can see the ground bot right in front of us would be a waste of computational resources as well as time.
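One way to get that "re-run on interrupt" behaviour in smach is its preemption mechanism. This sketch assumes the interrupt reaches the running state as a preempt request and that the outer machine routes the 'preempted' outcome back into the same state; it is not a fixed design.

```python
import smach

# Sketch of "re-run the state once an interrupt is received" using smach's
# built-in preemption.  Assumes the interrupt arrives as a preempt request and
# that the container maps 'preempted' back to this state.
class BotPrediction(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['target_chosen', 'preempted'])

    def execute(self, userdata):
        for _ in range(1000):             # stand-in for the real prediction loop
            if self.preempt_requested():  # an interrupt arrived
                self.service_preempt()
                return 'preempted'        # the container can re-enter this state
            # ... one chunk of the (expensive) prediction work ...
        return 'target_chosen'
```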

@gauravgardi
Member Author

I was referring to this state
[image: the state in question from the FSM diagram]

@ash-anand
Member

As I wrote in a later reply, I misunderstood. Also, running bot prediction with YOLO all the time won't go very well on the Pi.

@gauravgardi
Member Author

gauravgardi commented Apr 16, 2017

OK, I understand. Will it be feasible on the ODROID?

@ash-anand
Member

YOLO + bot detection with 20+ layers together is capable of making my laptop crash.
It has happened once or twice before, so I don't see the ODROID standing a chance.

@gauravgardi
Member Author

That would happen while training, right? Can't we train on some other machine and then use the learned parameters to run YOLO on the ODROID?

@ash-anand
Member

Not during training; it happened while running the detection code.
We won't be running it with 20 layers, but a minimum of 2 layers is required for accurate predictions.
Assuming my laptop is at least 10 times faster than the ODROID because of its CUDA cores, I don't see any chance of it running there.

@sam17
Member

sam17 commented Apr 18, 2017

Can I get a link to the YOLO code and the plan for it?
Also, we should never, ever shift detection strategy mid-flight. Things need to be completely deterministic in-flight.

@ash-anand
Member

@sam17 pjreddie.com/darknet/yolo/
Then we will need to figure out a way to make it cheap enough to run smoothly.

@amanchandra333

@ash-anand we don't necessarily need to re-run the states once an interrupt is received; we can just publish commands on a topic to stop a state's output while it keeps running in the background. Also, let me know whether we can run the bot detection and tracking state at all times in any case, and whether we can keep detecting the obstacles at all times, as sharp doesn't seem to be very useful.
By the way, we can run a script in smach to view the designed state machine directly (a sketch is below), so you don't need to draw the diagrams and everything. I'll be pushing the code shortly after the exams.
@gauravgardi I seem to be unable to assign the issue to myself. Please do it.
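For the viewer part, here is a sketch of what that script could look like: attach a smach_ros IntrospectionServer to the machine and run smach_viewer alongside it. The node, server, and path names are placeholders, and the machine here is a dummy just to show the wiring.

```python
#!/usr/bin/env python
# Sketch: publish the machine's structure so `rosrun smach_viewer smach_viewer.py`
# can draw it -- no hand-drawn diagrams needed.  Names are placeholders.
import rospy
import smach
import smach_ros


class Dummy(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['done'])

    def execute(self, userdata):
        return 'done'


def main():
    rospy.init_node('fsm_runner')

    sm = smach.StateMachine(outcomes=['finished'])
    with sm:
        smach.StateMachine.add('DUMMY', Dummy(), transitions={'done': 'finished'})

    # The introspection server exposes the machine's structure to smach_viewer.
    sis = smach_ros.IntrospectionServer('fsm_introspection', sm, '/SM_ROOT')
    sis.start()

    sm.execute()
    sis.stop()


if __name__ == '__main__':
    main()
```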

@sam17
Member

sam17 commented Apr 21, 2017

@amanchandra333 Which code are you talking about?

@amanchandra333

@sam17 the Python script for the state machine.

@gauravgardi
Member Author

gauravgardi commented Apr 21, 2017

@amanchandra333 the state machine diagram ash has drawn is just a reference for proceeding with the implementation. You can check whether your implementation is correct by matching the diagram generated by smach (which will reflect your implementation) against the diagram @ash-anand has drawn.
And where is the code you are referring to?

@amanchandra333

@gauravgardi I'll be pushing it shortly after the end-sem exams.
