How to connect to our existing remote HDFS and obtain metadata #5374
We conducted our testing based on version 0.6.1-incubating.
A fileset does not manage HDFS metadata; it manages a mapping between a logical directory and a physical directory.
Thank you for your reply.
Sorry, I couldn't get your point. Generally, you need to create a fileset that maps to an HDFS directory, and then you can read and write the HDFS data using gvfs://xxx rather than hdfs://xx, as the sketch below illustrates.
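A minimal sketch of reading data through GVFS instead of a raw hdfs:// path, using the Hadoop-compatible GravitinoVirtualFileSystem described in the Gravitino docs. The server URI and the metalake/catalog/schema/fileset names (test_metalake, hdfs_catalog, my_schema, my_fileset) are hypothetical placeholders, and class names may differ across versions:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class GvfsReadExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Register the Gravitino Virtual File System (class names per the Gravitino docs;
    // verify against the version you run).
    conf.set("fs.AbstractFileSystem.gvfs.impl",
        "org.apache.gravitino.filesystem.hadoop.Gvfs");
    conf.set("fs.gvfs.impl",
        "org.apache.gravitino.filesystem.hadoop.GravitinoVirtualFileSystem");
    // Hypothetical server URI and metalake name -- replace with your deployment's values.
    conf.set("fs.gravitino.server.uri", "http://gravitino-server:8090");
    conf.set("fs.gravitino.client.metalake", "test_metalake");

    // A gvfs path addresses the fileset, not the physical HDFS location:
    // gvfs://fileset/{catalog}/{schema}/{fileset}/{sub-path}
    Path path = new Path("gvfs://fileset/hdfs_catalog/my_schema/my_fileset/");
    try (FileSystem fs = path.getFileSystem(conf)) {
      for (FileStatus status : fs.listStatus(path)) {
        System.out.println(status.getPath());
      }
    }
  }
}
```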
Does creating a mapping refer to calling the API to create a fileset corresponding to the HDFS directory?
Exactly. A fileset maps a logical directory to a physical location while hiding the actual storage implementation (see the sketch below). So I wonder what problem you're encountering.
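A sketch of creating such a mapping with the Gravitino Java client, assuming an EXTERNAL fileset that points at an already-existing HDFS directory. All names, the server URI, and the HDFS path are hypothetical, and the client method signatures are as documented for the 0.6 line; check them against your version:

```java
import java.util.Map;
import org.apache.gravitino.NameIdentifier;
import org.apache.gravitino.client.GravitinoClient;
import org.apache.gravitino.file.Fileset;
import org.apache.gravitino.file.FilesetCatalog;

public class CreateFilesetExample {
  public static void main(String[] args) {
    // Hypothetical server URI and metalake/catalog/schema/fileset names.
    try (GravitinoClient client = GravitinoClient.builder("http://gravitino-server:8090")
        .withMetalake("test_metalake")
        .build()) {
      FilesetCatalog filesetCatalog = client.loadCatalog("hdfs_catalog").asFilesetCatalog();

      // An EXTERNAL fileset maps a logical name onto an existing HDFS directory;
      // the physical data stays where it is.
      Fileset fileset = filesetCatalog.createFileset(
          NameIdentifier.of("my_schema", "my_fileset"),
          "maps our existing HDFS directory",
          Fileset.Type.EXTERNAL,
          "hdfs://namenode-host:8020/path/to/existing/dir",
          Map.of());
      System.out.println("Created fileset at: " + fileset.storageLocation());
    }
  }
}
```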
We attempted to connect to HDFS by entering an IP and port in the location parameter, but we were unable to retrieve any catalogs or filesets, and there were no error messages in Gravitino. How should we configure it to successfully retrieve the metadata?
[A screenshot of the configuration was attached here; HDFS metadata could not be obtained through that configuration.]
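A sketch of how a Hadoop-backed fileset catalog is typically configured, assuming the Java client and a `location` catalog property set to the full hdfs:// URI (scheme, namenode host, and RPC port) rather than a bare IP and port. Names and addresses are hypothetical, and the `createCatalog` signature is as documented for the 0.6 line:

```java
import java.util.Map;
import org.apache.gravitino.Catalog;
import org.apache.gravitino.client.GravitinoClient;

public class CreateHdfsCatalogExample {
  public static void main(String[] args) {
    // Hypothetical names and addresses -- substitute your own deployment's values.
    try (GravitinoClient client = GravitinoClient.builder("http://gravitino-server:8090")
        .withMetalake("test_metalake")
        .build()) {
      // The "hadoop" provider backs fileset catalogs with HDFS (or other
      // Hadoop-compatible storage). The location should be a full hdfs:// URI,
      // not just an IP and port.
      Catalog catalog = client.createCatalog(
          "hdfs_catalog",
          Catalog.Type.FILESET,
          "hadoop",
          "catalog backed by our existing HDFS cluster",
          Map.of("location", "hdfs://namenode-host:8020/user/gravitino"));

      // Catalogs and filesets only appear after they are explicitly created;
      // as noted above, Gravitino does not crawl an existing HDFS namespace
      // into metadata on its own.
      System.out.println("Created catalog: " + catalog.name());
    }
  }
}
```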