Simulating UR in ROS #59
Comments
Hello, I guess you need to write your own Communicator, e.g. a ROSCommunicator, and later your own task. Have a look at https://github.com/kindredresearch/SenseAct/tree/master/docs. Could you share your ROS/Gazebo code?
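A rough sketch of what such a ROS-backed communicator could look like (the base-class constructor arguments, `_sensor_handler` hook, and `sensor_buffer.write` call are assumptions modeled on SenseAct's bundled communicators; check `senseact/communicator.py` for the exact API):

```python
# Hypothetical ROSCommunicator sketch: reads /joint_states into SenseAct's
# shared sensor buffer. Method and argument names follow the pattern of
# SenseAct's bundled communicators and may need adjusting.
import numpy as np
import rospy
from sensor_msgs.msg import JointState
from senseact.communicator import Communicator


class ROSCommunicator(Communicator):
    def __init__(self):
        self._latest_joints = np.zeros(6)
        rospy.init_node('senseact_ros_communicator', anonymous=True)
        rospy.Subscriber('/joint_states', JointState, self._joint_state_cb)
        sensor_args = {'buffer_len': 1,
                       'array_len': 6,
                       'array_type': 'd',
                       'np_array_type': 'd'}
        super().__init__(use_sensor=True, use_actuator=False,
                         sensor_args=sensor_args, actuator_args=None)

    def _joint_state_cb(self, msg):
        # Cache the most recent joint positions coming from Gazebo/ROS.
        self._latest_joints = np.array(msg.position[:6])

    def _sensor_handler(self):
        # Called repeatedly by the communicator process; push the newest
        # joint reading into the shared buffer the RL task reads from.
        self.sensor_buffer.write(self._latest_joints)
```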
@Murtaza786-oss Hi, that is a cool video you have there. I have been working with ROS/Gazebo for some time now too, mostly on this repo: https://github.com/cambel/ur3. I am using a UR3, but the changes needed to use a UR5 are minor. I'm also starting to create some gym environments based on openai_ros, so feel free to check it out; I'd be glad to help you with any questions/issues.
@cambel Thank you for replying and for your comment. I have started working on my task with RL using PPO. I looked at the repo you referred to; it seems that you are not using the MoveIt Python interface and instead define the FK and IK manually, whereas I am using the MoveIt Python interface for motion planning. Right now, I have defined my environment.
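For reference, driving the arm through the MoveIt Python interface looks roughly like this (the move group name "manipulator" is the common UR default but depends on your SRDF):

```python
# Minimal MoveIt Python interface usage for a UR arm in Gazebo.
import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('ur5_moveit_demo', anonymous=True)

group = moveit_commander.MoveGroupCommander('manipulator')

# Plan and execute a small joint-space motion.
joints = group.get_current_joint_values()
joints[0] += 0.1  # rotate the base joint slightly
group.go(joints, wait=True)
group.stop()  # ensure no residual movement
```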
@Murtaza786-oss Yeah, I don't use moveit-python because, on one hand, I found it to be a bit slower compared to a direct control approach. On the other hand, since I want the robot to move one step at a time and not along a predefined trajectory, a motion planner such as MoveIt is not necessary. So I use two different approaches to control the robot, each with some pros and cons. The main controller I use is here: https://github.com/cambel/ur3/blob/master/ur_control/src/ur_control/arm.py; there is one for the gripper too.
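A direct control approach typically bypasses MoveIt and publishes straight to the joint trajectory controller, roughly like this (the `/arm_controller/command` topic name is an assumption; it depends on your controller configuration):

```python
# Direct joint control: publish a single-point JointTrajectory to the
# position controller, bypassing MoveIt's planning pipeline entirely.
import rospy
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint

rospy.init_node('direct_joint_control')
pub = rospy.Publisher('/arm_controller/command', JointTrajectory, queue_size=1)
rospy.sleep(1.0)  # give the publisher time to connect

traj = JointTrajectory()
traj.joint_names = ['shoulder_pan_joint', 'shoulder_lift_joint', 'elbow_joint',
                    'wrist_1_joint', 'wrist_2_joint', 'wrist_3_joint']
point = JointTrajectoryPoint()
point.positions = [0.0, -1.57, 1.57, -1.57, -1.57, 0.0]
point.time_from_start = rospy.Duration(0.5)  # reach the target in 0.5 s
traj.points.append(point)
pub.publish(traj)
```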
@cambel Yeah, that makes a lot of sense! Which versions of ROS and Python did you use for your project? I tried to run catkin_make after cloning your repo, but it is throwing me this error:
@Murtaza786-oss I'm using Python 3 right now, but with a few minor changes it should work on Python 2 as well. Open an issue in my repo if you have any trouble; I'd be happy to help.
@cambel Hi, I cannot open a new issue in your repo because I don't see any option to do so. By the way, I am also sharing the current training video of one of the UR5s: it can be seen that the robot collides with itself and with the table because there is no contact sensor installed. [https://www.dropbox.com/s/b6bd5qjyj1bh9x3/Training_UR5_PPO.mp4?dl=0]
@Murtaza786-oss I re-uploaded the repository, so Issues can now be created. I do not have a collision detector for the UR, but I implemented a force/torque sensor at the end effector that you could use to check whether a force is detected. From what I can see in the video, though, it seems that you are using direct joint commands as the robot's actions. From experience, I would recommend you avoid that; it tends not to work, as there is no continuity to the motion of the robot. What I do instead of direct joint commands is step joint commands, i.e., instead of `joint_command = action`, use `joint_command = current_joints + action`. Then make the actions smaller; limit them to something suitable for your task, such as a maximum of 5 cm per step or less. That also makes it easier to compute the forward kinematics of the expected new pose and validate the z position of the end effector before the robot even moves.
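A minimal sketch of that idea (the `forward_kinematics` callable, limits, and table height here are hypothetical placeholders; `ur_control/arm.py` in cambel/ur3 provides real FK utilities):

```python
import numpy as np

MAX_STEP = 0.05   # cap per-joint increments (rad); a Cartesian 5 cm cap would use FK instead
TABLE_Z = 0.02    # assumed minimum safe end-effector height in meters

def step_joint_command(current_joints, action, forward_kinematics):
    """Turn a policy action into a relative joint command and reject
    steps whose expected end-effector z would hit the table.

    `forward_kinematics` is a hypothetical callable mapping joint
    angles to an (x, y, z) end-effector position."""
    # Relative commands: add a clipped increment to the current joints
    # instead of commanding absolute joint angles directly.
    delta = np.clip(np.asarray(action), -MAX_STEP, MAX_STEP)
    target = np.asarray(current_joints) + delta

    # Validate the expected pose before sending anything to the robot.
    x, y, z = forward_kinematics(target)
    if z < TABLE_Z:
        return np.asarray(current_joints)  # reject unsafe step, stay put
    return target
```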
@cambel Hi, have you finished the openai_ros setup for the UR3 robot? I am doing a project on a UR5e and am stuck coding the task_envs.
@ZZWang21 Hi, yes I have been using it for a while now. You can find a working demo here https://github.com/cambel/robot-learning-cl-dr |
Hello, I have been looking for such a repository for a long time. I am currently working on a project that involves simulating and coordinating two UR5s in ROS/Gazebo for a pick-and-place task: the first robot picks up an object from a given position and moves to a handover position, and the second robot moves there, takes the object from the first robot, and carries it to the goal position. I am using Robotiq 85 grippers for grasping, and I can successfully control both robots. So far I have performed this task by hard-coding it (you can find the video here: https://www.youtube.com/watch?v=n6Vk9lIxKkg), but I want to perform it using PPO. For that, I need to create an environment such that when the agent takes an action, that action is executed in the simulated world via the MoveIt Python interface (the library used to command the robot's motions in the Gazebo-simulated world). What modifications do I need to make to the environment and the agent in order to train the robot in simulation?
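A minimal sketch of such a gym-style environment wrapping the MoveIt Python interface could look like this (the group name, action scaling, observation layout, and reward are placeholder assumptions, not a complete task definition):

```python
# Sketch of a gym environment that executes policy actions through the
# MoveIt Python interface, using step joint commands as discussed above.
import sys
import gym
import numpy as np
import rospy
import moveit_commander
from gym import spaces


class UR5MoveItEnv(gym.Env):
    def __init__(self):
        moveit_commander.roscpp_initialize(sys.argv)
        rospy.init_node('ur5_ppo_env', anonymous=True)
        self.group = moveit_commander.MoveGroupCommander('manipulator')
        # Small relative joint increments instead of absolute commands.
        self.action_space = spaces.Box(low=-0.05, high=0.05, shape=(6,))
        self.observation_space = spaces.Box(low=-np.pi, high=np.pi, shape=(6,))

    def step(self, action):
        joints = np.array(self.group.get_current_joint_values())
        target = joints + np.clip(action, -0.05, 0.05)
        self.group.go(target.tolist(), wait=True)
        self.group.stop()
        obs = np.array(self.group.get_current_joint_values())
        reward = 0.0   # placeholder: task-specific reward goes here
        done = False   # placeholder: task-specific termination check
        return obs, reward, done, {}

    def reset(self):
        # Return to a fixed home configuration between episodes.
        self.group.go([0.0, -1.57, 1.57, -1.57, -1.57, 0.0], wait=True)
        self.group.stop()
        return np.array(self.group.get_current_joint_values())
```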