spark/databricks: out-of-catalog schema enumeration
detule committed Oct 26, 2023
1 parent ea2b935 commit 2c44151
Showing 2 changed files with 22 additions and 1 deletion.
2 changes: 1 addition & 1 deletion NEWS.md
@@ -1,6 +1,6 @@
# odbc (development version)


* Spark SQL: Correctly enumerate schemas away from the current catalog (@detule, #614)
* Modify `odbcDataType.Snowflake` to better reflect Snowflake Data Types documentation (@meztez, #599).
* SQL Server: Specialize syntax in sqlCreateTable to avoid failures when
writing to (new) local temp tables. (@detule, #601)
21 changes: 21 additions & 0 deletions R/db.R
@@ -225,6 +225,27 @@ setMethod(
}
})

# Spark SQL ----------------------------------------------------------------

#' @details Databricks supports multiple catalogs. The default implementation
#' of `odbcConnectionSchemas`, however, routes through `SQLTables` and is
#' likely to enumerate only the schemas in the currently active catalog.
#'
#' This implementation respects the `catalog_name` argument.
#' @rdname odbcConnectionSchemas
setMethod(
  "odbcConnectionSchemas",
  c("Spark SQL", "character"),
  function(conn, catalog_name) {
    # SHOW SCHEMAS IN <catalog> enumerates schemas outside the active catalog
    res <- dbGetQuery(conn, paste0("SHOW SCHEMAS IN ", catalog_name))
    if (nrow(res)) {
      return(res$databaseName)
    }
    return(character())
  }
)

# DB2 ----------------------------------------------------------------

setClass("DB2/AIX64", where = class_cache)
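For context, a minimal usage sketch follows. It is not part of the commit: the DSN name `"Databricks"`, the catalog `samples`, and calling the generic via `:::` (in case it is not exported) are all assumptions.

```r
library(DBI)

# Hypothetical DSN; assumes a configured Databricks / Spark SQL ODBC data source.
con <- dbConnect(odbc::odbc(), dsn = "Databricks")

# The method above issues a query of this form, so schemas in a catalog other
# than the currently active one can be listed:
dbGetQuery(con, "SHOW SCHEMAS IN samples")

# Equivalent dispatch through the generic; the catalog name is illustrative and
# ::: is used in case the generic is not exported.
odbc:::odbcConnectionSchemas(con, catalog_name = "samples")

dbDisconnect(con)
```

Before this change, the default method fell back to `SQLTables` and listed only the schemas in the currently active catalog.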
