[R] How to pre-process fwf or csv files to remove unexpected characters in R?

Jeff Newmiller jdnewmil at dcn.davis.ca.us
Sun Nov 6 17:12:09 CET 2016

?readLines ... given the large size of the file you may need to process it in chunks, by specifying a file connection rather than a character-string file name and using the "n" argument. 
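For what it's worth, a chunked filter along those lines might look like the sketch below. It is only an illustration, not a drop-in solution: the file names, the 20-byte record width, and the "validity" test (expected width, printable ASCII only) are assumptions you would replace with your own layout.

```r
## Minimal sketch: stream a fixed-width file in chunks with readLines()
## on a connection, writing valid and invalid rows to separate files.
## File names, the 20-character width, and the validity test are
## placeholder assumptions -- adapt them to your data.

## toy stand-in for the real (large) input file
writeLines(c(strrep("a", 20),                    # valid row
             "too short",                        # wrong width
             paste0(strrep("b", 19), "\xff")),   # invalid byte
           "big.txt", useBytes = TRUE)

con     <- file("big.txt", open = "r")
ok_out  <- file("valid.txt", open = "w")
bad_out <- file("invalid.txt", open = "w")

repeat {
  chunk <- readLines(con, n = 1e6, skipNul = TRUE)  # 1e6 lines per pass
  if (length(chunk) == 0) break
  ## "valid" here means: expected byte width and printable ASCII only
  valid <- nchar(chunk, type = "bytes") == 20 &
           !grepl("[^ -~]", chunk, useBytes = TRUE)
  writeLines(chunk[valid],  ok_out, useBytes = TRUE)
  writeLines(chunk[!valid], bad_out, useBytes = TRUE)
}

close(con); close(ok_out); close(bad_out)
```

The invalid rows end up in their own file for later inspection, which mirrors the valid/not-valid split an ETL tool would give you.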




There are many ways for data to be corrupted... in particular, when invalid characters appear the possibilities explode, so more specifics are needed if this is not enough. Of course, reading the Posting Guide, posting in plain text to avoid HTML corruption, and giving reproducible examples will improve the quality of responses to questions like these. 

Sent from my phone. Please excuse my brevity.

On November 6, 2016 5:36:46 AM PST, Lucas Ferreira Mation <lucasmation at gmail.com> wrote:
>I have some large .txt files (about ~100GB) containing a dataset in
>fixed-width format. These contain some errors:
>- non-numeric characters in columns that are supposed to be numeric,
>- invalid characters,
>- rows with too many characters, possibly due to invalid characters or
>a missing end-of-line character (so two rows in the original data
>become one row in the .txt file).
>The errors are not very frequent, but they stop me from importing with
>readr. Is there some package, or workflow, in R to pre-process the
>files, separating the valid from the invalid rows into different files?
>This can be done with ETL point-and-click tools, such as Pentaho PDI.
>Is there some equivalent code in R to do this?
>I googled it and could not find a solution. I also asked this on
>StackOverflow and got no answer (here
>Lucas Mation
>IPEA - Brasil
