
Atom table limit hit if riak admin called regularly #1066

Open
Bob-The-Marauder opened this issue May 12, 2021 · 3 comments

@Bob-The-Marauder (Contributor)

One of our customers found an issue with KV 3.0.3 where the atom table keeps becoming exhausted if riak admin is called regularly, e.g. when polling riak admin status for monitoring purposes. This was traced to a problem with relx in pre-OTP 23 builds. We have filed the following PR: erlware/relx#868

Here is a brief example where the atom count increases.

[root@localhost riak]# riak start
[root@localhost riak]# riak attach
Attaching to /tmp/erl_pipes/[email protected]/erlang.pipe.1 (^D to exit)

([email protected])1> erlang:system_info(atom_count).
52654
([email protected])2> [Quit]
[root@localhost riak]# riak admin cluster status
---- Cluster Status ----
Ring ready: true

+--------------------+------+-------+-----+-------+
|        node        |status| avail |ring |pending|
+--------------------+------+-------+-----+-------+
| (C) [email protected] |valid |  up   |100.0|  --   |
+--------------------+------+-------+-----+-------+

Key: (C) = Claimant; availability marked with '!' is unexpected
[root@localhost riak]# riak attach
Attaching to /tmp/erl_pipes/[email protected]/erlang.pipe.1 (^D to exit)

([email protected])2> erlang:system_info(atom_count).
52656

Although such a small increment should not really cause any issues on its own, when riak admin status is polled regularly 24 hours/day it slowly adds up until you finally hit the default atom limit of 1,048,576 and Riak crashes. The current workaround is to restart Riak before the atom count gets too high.
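
For anyone who needs to keep an eye on this in the meantime, here is a minimal sketch of a check that could be run from riak attach or wired into a monitoring script; the 90% threshold is just an arbitrary example, not anything Riak ships with:

%% Both calls are standard OTP: atom_count is the current number of atoms,
%% atom_limit is the maximum (1,048,576 by default, adjustable with the +t emulator flag).
Count = erlang:system_info(atom_count),
Limit = erlang:system_info(atom_limit),
case Count / Limit > 0.9 of
    true  -> io:format("WARNING: atom table ~p% full~n", [round(100 * Count / Limit)]);
    false -> ok
end.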

@martincox (Contributor)

Ahh interesting. I'm sure I've heard this same problem talked about before.

Sounds like it might be something along the lines of using list_to_atom/1 when creating a random maint shell name, which I think would occur every time riak admin is called.
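
To illustrate the suspected mechanism (a sketch under the assumption that a fresh name is converted with list_to_atom/1 on every invocation, not the actual relx code):

%% Hypothetical illustration of the suspected leak: each run mints a unique
%% name and interns it as a brand-new atom, and atoms are never garbage
%% collected, so the table only ever grows.
Before = erlang:system_info(atom_count),
Name = "maint_" ++ integer_to_list(erlang:unique_integer([positive])),
_Shell = list_to_atom(Name),
erlang:system_info(atom_count) - Before.   %% typically 1 per invocation

The usual remedy is to avoid minting a new atom per call; see erlware/relx#868 for the change that was actually filed.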

@martincox (Contributor)

Ignore me, I didn't read properly - I see that it's already been dug into and the guilty code found and fixed.

@Bob-The-Marauder (Contributor, Author)

We made these changes locally and, although there does seem to be some improvement, they do not fix the issue. We are still trying to track down its source.
