[R] reading in csv files, some of which have column names and some of which don't

Bert Gunter bgunter.4567 at gmail.com
Tue Aug 13 20:32:37 CEST 2019

Are these files of numerics? In other words, how would one know whether the
first line of a file of alpha data is a header or not? read.table's Help
file contains some information that may or may not be relevant for your files, too.

Assuming a criterion for distinction, one could simply read the first line
of a file, check the criterion, and then read it with or without headers as
appropriate. R can create default column names.  One could also use
readLines with connections, but I don't think this is necessary, though
maybe it's more elegant or faster.

Without knowing how to tell whether the first line is a header or not, I
have no clue. Maybe the filename or suffix might tell you something.
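If the data columns are in fact numeric, that criterion can be coded directly: peek at the first line of each file and test whether its first field parses as a number. A minimal sketch of the idea (`read_csv_guess` and `col.names.known` are hypothetical names, and the real files' column names would have to be supplied):

```r
## Sketch only: assumes the data columns are numeric, so a first field
## that does not parse as a number marks a header row.
## `col.names.known` is hypothetical -- supply the files' real column names.
read_csv_guess <- function(filename, col.names.known) {
  first.field <- strsplit(readLines(filename, n = 1), ",")[[1]][1]
  has.header <- is.na(suppressWarnings(as.numeric(first.field)))
  bb <- read.csv(filename, colClasses = "character", header = has.header)
  if (!has.header) names(bb) <- col.names.known
  bb
}
```

With consistent names assigned to the header-less files, the resulting data frames should then align properly when combined with dplyr::bind_rows.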

-- Bert

Bert Gunter

"The trouble with having an open mind is that people keep coming along and
sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )

On Tue, Aug 13, 2019 at 11:00 AM Christopher W Ryan <cryan at binghamton.edu> wrote:

> Alas, we spend so much time and energy on data wrangling . . . .
> I'm given a collection of csv files to work with---"found data". They arose
> via saving Excel files to csv format. They all have the same column
> structure, except that some were saved with column names and some were not.
> I have a code snippet that I've used before to traverse a directory and
> read into R all the csv files of a certain filename pattern within it, and
> combine them all into a single dataframe:
> library(dplyr)
> ## specify the csv files that I will want to access
> files.to.read <- list.files(path = "H:/EH",
>     pattern = "WICLeadLabOrdersDone.+", all.files = FALSE,
>     full.names = TRUE, recursive = FALSE, ignore.case = FALSE,
>     include.dirs = FALSE, no.. = FALSE)
> ## function to read csv files back in
> read.csv.files <- function(filename) {
>     bb <- read.csv(filename, colClasses = "character", header = TRUE)
>     bb
> }
> ## now read the csv files, as all character
> b <- lapply(files.to.read, read.csv.files)
> ddd <- bind_rows(b)
> But this assumes that all files have column names in their first row. In
> this case, some don't. Any advice how to handle it so that those with
> column names and those without are read in and combined properly? The only
> thing I've come up with so far is:
> ## function to read csv files back in
> ## Unfortunately, some of the csv files are saved with column headers, and
> some are saved without them.
> ## This presents a problem when defining the function to read them: header
> = TRUE or header = FALSE?
> ## The best solution I can think of as of 13 August 2019 is to use header =
> FALSE and skip the
> ## first row of every file. This will sacrifice one record from each csv of
> about 80 files
> read.csv.files <- function(filename) {
>     bb <- read.csv(filename, colClasses = "character", header = FALSE,
>                    skip = 1)
>     bb
> }
> This sacrifices about 80 out of about 1600 records. For my purposes in this
> instance, this may be acceptable, but of course I'd rather not.
> Thanks.
> --Chris Ryan
> ______________________________________________
> R-help at r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

