Quickly read and write a CSV file, usually as part of staging or loading a larger object. This assumes that all files follow the comservatory specification.

quickReadCsv(
  path,
  expected.columns,
  expected.nrows,
  compression,
  row.names,
  parallel = TRUE
)

quickWriteCsv(
  df,
  path,
  ...,
  row.names = FALSE,
  compression = "gzip",
  validate = TRUE
)

Arguments

path

String containing a path to a CSV to read/write.

expected.columns

Named character vector specifying the type of each column in the CSV (excluding the first column containing row names, if row.names=TRUE). A sketch of one way to construct this vector is shown after this argument list.

expected.nrows

Integer scalar specifying the expected number of rows in the CSV.

compression

String specifying the compression that was/will be used. This should be either "none" or "gzip".

row.names

For .quickReadCsv, a logical scalar indicating whether the CSV contains row names.

For .quickWriteCsv, a logical scalar indicating whether to save the row names of df.

parallel

Logical scalar indicating whether reading and parsing should be performed concurrently.

df

A DataFrame or data.frame object, containing only atomic columns.

...

Further arguments to pass to write.csv.

validate

Logical scalar indicating whether to double-check that the generated CSV complies with the comservatory specification.
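
The expected.columns vector can be written by hand, as in the Examples below, or derived from an existing data frame. The following is a minimal sketch of the latter; the ref object and the use of class() to obtain type names are illustrative assumptions, not part of this interface.

ref <- data.frame(A=runif(5), B=sample(letters, 5))

# Named character vector: names are the CSV columns, values are their types
# ("numeric" and "character" here, matching the Examples below).
expected <- vapply(ref, function(x) class(x)[1], character(1))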

Value

For .quickReadCsv, a DataFrame containing the contents of path.

For .quickWriteCsv, df is written to path and a NULL is invisibly returned.

Author

Aaron Lun

Examples

library(S4Vectors)
df <- DataFrame(A=1, B="Aaron")

temp <- tempfile()
.quickWriteCsv(df, path=temp, row.names=FALSE, compression="gzip")
#> NULL

.quickReadCsv(temp, c(A="numeric", B="character"), 1, "gzip", FALSE)
#> DataFrame with 1 row and 2 columns
#>           A           B
#>   <numeric> <character>
#> 1         1       Aaron
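
A further sketch (not run here) that round-trips row names with uncompressed output, assuming row.names=TRUE and compression="none" behave as described under Arguments:

df2 <- DataFrame(A=runif(3), B=c("x", "y", "z"), row.names=c("r1", "r2", "r3"))
temp2 <- tempfile()
.quickWriteCsv(df2, path=temp2, row.names=TRUE, compression="none")
.quickReadCsv(temp2, c(A="numeric", B="character"), 3, "none", TRUE)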