
About markovian environments #30

Open
shanlior opened this issue Sep 12, 2020 · 4 comments
shanlior commented Sep 12, 2020

Hi,
thanks for the thorough implementation and making this code available, it really helps to understand the internal mechanisms of the SAC algorithm.

I have a question about the code in sac/sac/envs/gym_env.py.
The file's header comment says: "Rllab implementation with a HACK. See comment in GymEnv.init().", and then in the init() method you write:

# HACK: Gets rid of the TimeLimit wrapper that sets 'done = True' when
# the time limit specified for each environment has been passed and
# therefore the environment is not Markovian (terminal condition depends
# on time rather than state).

I understand the point, but I'm not sure I follow the implementation: the TimeLimit wrapper seems to be internal Gym code and is not part of the SAC code in this repository.

Can you explain exactly what you are doing with the TimeLimit wrapper?
If you omit the done flag, do you still terminate the episode?

Specifically, in Gym's registration.py the env class is wrapped with:

if env.spec.max_episode_steps is not None:
    from gym.wrappers.time_limit import TimeLimit
    env = TimeLimit(env, max_episode_steps=env.spec.max_episode_steps)

Furthermore, time_limit.py does the following in step():

def step(self, action):
    assert self._elapsed_steps is not None, "Cannot call env.step() before calling reset()"
    observation, reward, done, info = self.env.step(action)
    self._elapsed_steps += 1
    if self._elapsed_steps >= self._max_episode_steps:
        info['TimeLimit.truncated'] = not done
        done = True
    return observation, reward, done, info

If you omit these lines of code, how does the environment reset itself when max_episode_steps is reached?

Thanks!

Lior

@haarnoja
Owner

Hi Lior,

The environment is reset explicitly by the sampler here: https://github.com/haarnoja/sac/blob/master/sac/misc/sampler.py#L133
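A minimal, self-contained sketch of what this amounts to (toy classes, not gym's or the repo's actual code): a TimeLimit-style wrapper forces done=True at the horizon, making termination time-dependent; peeling the wrapper off and letting the sampler enforce the horizon keeps the terminal signal purely state-dependent.

```python
class ToyEnv:
    """A chain env whose *true* terminal condition is reaching state 5."""
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        self.state += action
        done = self.state >= 5  # state-dependent (Markovian) termination
        return self.state, 1.0, done, {}


class TimeLimitLike:
    """Mimics gym.wrappers.TimeLimit: forces done=True after max steps."""
    def __init__(self, env, max_episode_steps):
        self.env = env
        self._max_episode_steps = max_episode_steps
        self._elapsed_steps = None

    def reset(self):
        self._elapsed_steps = 0
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self._elapsed_steps += 1
        if self._elapsed_steps >= self._max_episode_steps:
            info['TimeLimit.truncated'] = not done
            done = True  # time-based, non-Markovian termination
        return obs, reward, done, info


def unwrap_time_limit(env):
    # Peel off the wrapper, analogous to the HACK in gym_env.py.
    return env.env if isinstance(env, TimeLimitLike) else env


wrapped = TimeLimitLike(ToyEnv(), max_episode_steps=3)
wrapped.reset()
for a in [1, 1, 1]:
    obs, r, done, info = wrapped.step(a)
print(done, info)  # True {'TimeLimit.truncated': True} -- time ran out

raw = unwrap_time_limit(wrapped)
raw.reset()
for a in [1, 1, 1]:
    obs, r, done, info = raw.step(a)
print(done, info)  # False {} -- state 3 is not a terminal state
```

With the wrapper removed, the sampler is responsible for resetting once its own max_path_length counter is hit, regardless of the done flag.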

I hope this answers your question!

Cheers
Tuomas

@shanlior
Author

Thanks for the reply!

So if I understand correctly, you do not bootstrap when the path length exceeds the maximum allowed path length?

Best,

Lior

@haarnoja
Owner

Hi, we do bootstrap when the path length exceeds the maximum length, because reaching the time limit does not mean that we enter a terminal state. We don't bootstrap if we reach any of the actual terminal states, for example if the humanoid falls to the ground.
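The distinction can be sketched in the Bellman backup (illustrative names and a hypothetical value function, not SAC's actual code): the mask on the bootstrap term should reflect true termination only, so a time-limit truncation still bootstraps from the next state's value.

```python
GAMMA = 0.99  # illustrative discount factor

def q_target(reward, next_value, true_terminal):
    # true_terminal is False for time-limit truncation, so we still
    # bootstrap; it is True only for genuine terminal states.
    return reward + (0.0 if true_terminal else GAMMA * next_value)

# Humanoid falls to the ground: genuine terminal state, no bootstrap.
print(q_target(reward=1.0, next_value=10.0, true_terminal=True))   # 1.0
# Episode merely hits the step limit: not terminal, bootstrap from V(s').
print(q_target(reward=1.0, next_value=10.0, true_terminal=False))  # 10.9
```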

Best
Tuomas

@shanlior
Author

shanlior commented Sep 29, 2020 via email
