Welcome to FROBS_RL documentation!
FROBS_RL (Flexible Robotics Reinforcement Learning Library) is a library for reinforcement learning (RL) in robotics, designed primarily for applications built on the ROS framework. It is written in Python and uses PyTorch-based libraries to handle the machine learning. The library uses OpenAI Gym to create and handle RL environments, stable-baselines3 to provide state-of-the-art RL algorithms, Gazebo to simulate the physical environments, and XTerm to display and launch many of the ROS nodes and processes in lightweight terminals.
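Since FROBS_RL builds its environments on the OpenAI Gym interface, the core contract is a `reset()`/`step()` pair. The following is a minimal, self-contained sketch of that interface with a toy 1-D task (no ROS, Gazebo, or FROBS_RL code; all names are illustrative):

```python
# Toy Gym-style environment: an agent at integer position `pos`
# moves by -1/+1 and is rewarded for reaching the origin.
# Illustrates the reset()/step() contract only; not a FROBS_RL API.
import random

class ToyReachEnv:
    def __init__(self, max_steps=20):
        self.max_steps = max_steps
        self.pos = 0
        self.steps = 0

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.pos = random.randint(-5, 5)
        self.steps = 0
        return self.pos

    def step(self, action):
        """Apply an action; return (obs, reward, done, info)."""
        self.pos += action
        self.steps += 1
        reward = -abs(self.pos)  # closer to the origin is better
        done = self.pos == 0 or self.steps >= self.max_steps
        return self.pos, reward, done, {}

env = ToyReachEnv()
obs = env.reset()
done = False
while not done:
    action = -1 if obs > 0 else 1  # hand-coded policy: step toward 0
    obs, reward, done, info = env.step(action)
print(obs)  # the policy always reaches position 0
```

A real FROBS_RL environment replaces the toy dynamics with calls into ROS/Gazebo, but exposes the same interface so that any stable-baselines3 algorithm can train on it unchanged.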
FROBS_RL is hosted in a GitHub repository: https://github.com/jmfajardod/frobs_rl
The library's main goals are to:

- Provide a framework to easily train and deploy RL algorithms in robotics applications using the ROS middleware.
- Provide a framework to easily create RL environments for any type of task.
- Provide a framework to easily use, test, or create state-of-the-art RL algorithms in robotics applications.
This project is under active development.
- Welcome to FROBS_RL documentation!
- Environment creation
- Environment templates
- Training a model
- RL Models
- Using trained models
- Custom Robot environment
- Custom Task environment
- Basic Model
- Advantage Actor Critic (A2C)
- Deep Deterministic Policy Gradient (DDPG)
- Deep Q Network (DQN)
- Proximal Policy Optimization (PPO)
- Soft Actor Critic (SAC)
- Twin Delayed Deep Deterministic Policy Gradient (TD3)
- Normalize Action Wrapper
- Normalize Observation Wrapper
- Time Limit Wrapper
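The wrappers listed above follow the standard Gym wrapper pattern: an object that holds an inner environment and transparently modifies its actions, observations, or termination behavior. A minimal sketch of a time-limit wrapper in that spirit (a generic illustration, not FROBS_RL's implementation; all names are hypothetical):

```python
# Gym-style time-limit wrapper sketch: forces `done` after a fixed
# number of steps, regardless of the inner environment's state.
class TimeLimitWrapper:
    def __init__(self, env, max_steps):
        self.env = env
        self.max_steps = max_steps
        self._elapsed = 0

    def reset(self):
        self._elapsed = 0
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self._elapsed += 1
        if self._elapsed >= self.max_steps:
            done = True
            info["truncated"] = True  # mark time-limit termination
        return obs, reward, done, info

# Tiny inner environment for demonstration: never terminates on its own.
class NeverDoneEnv:
    def reset(self):
        return 0

    def step(self, action):
        return 0, 0.0, False, {}

env = TimeLimitWrapper(NeverDoneEnv(), max_steps=3)
env.reset()
done, steps = False, 0
while not done:
    _, _, done, info = env.step(0)
    steps += 1
print(steps)  # 3
```

The normalize-action and normalize-observation wrappers work the same way, rescaling the values passed through `step()` instead of altering `done`.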