[Rd] Why does the lexical analyzer drop comments?

Romain Francois romain.francois at dbmail.com
Fri Mar 20 21:57:38 CET 2009


Peter Dalgaard wrote:
> Duncan Murdoch wrote:
>> On 3/20/2009 2:56 PM, romain.francois at dbmail.com wrote:
>>> It happens in the token function in gram.c:
>>>     c = SkipSpace();
>>>     if (c == '#') c = SkipComment();
>>>
>>> and then SkipComment goes like that:
>>> static int SkipComment(void)
>>> {
>>>     int c;
>>>     while ((c = xxgetc()) != '\n' && c != R_EOF) ;
>>>     if (c == R_EOF) EndOfFile = 2;
>>> Â Â Â  return c;
>>> }
>>>
>>> which effectively drops comments.
>>>
>>> Would it be possible to keep the information somewhere?
>>> The source code says this:
>>>  *  The function yylex() scans the input, breaking it into
>>>  *  tokens which are then passed to the parser.  The lexical
>>>  *  analyser maintains a symbol table (in a very messy fashion).
>>>
>>> so my question is: could we use this symbol table to keep track of, 
>>> say, COMMENT tokens?
>>> Why would I even care about that? I'm writing a package that will
>>> perform syntax highlighting of R source code based on the output of the
>>> parser, and it seems a waste to drop the comments.
>>> Also, when you print a function to the R console, you don't get 
>>> the comments, and some of them might be useful to the user.
>>>
>>> Am I mad if I contemplate looking into this?
>>
>> Comments are syntactically the same as whitespace.  You don't want 
>> them to affect the parsing.
>
> Well, you might, but there is quite some madness lying that way.
>
> Back in the bronze age, we did actually try to keep comments attached 
> to (AFAIR) the preceding token. One problem is that the elements of 
> the parse tree typically involve multiple tokens, and if comments 
> after different tokens get stored in the same place something is not 
> going back where it came from when deparsing. So we had problems with 
> comments moving from one end of a loop to the other, and the like.
Ouch. That helps me picture the kind of madness ...

Another way could be to record the comments separately (similarly to the 
srcfile attribute, for example) instead of dropping them entirely, but I 
guess this amounts to the same thing as Duncan's idea, which is easier 
to set up.

> You could try extending the scheme by encoding which part of a 
> syntactic structure the comment belongs to, but consider for instance 
> how many places in a function call you can stick in a comment.
>
> f #here
> ( #here
> a #here (possibly)
> = #here
> 1 #this one belongs to the argument, though
> ) #but here as well
>
>>
>> If you're doing syntax highlighting, you can determine the whitespace by
>> looking at the srcref records, and then parse that to determine what 
>> isn't being counted as tokens.  (I think you'll find a few things 
>> there besides whitespace, but it is a fairly limited set, so 
>> shouldn't be too hard to recognize.)
>>
>> The Rd parser is different, because in an Rd file, whitespace is 
>> significant, so it gets kept.
>>
>> Duncan Murdoch
>>
>> ______________________________________________
>> R-devel at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-devel
>
>


-- 
Romain Francois
Independent R Consultant
+33(0) 6 28 91 30 30
http://romainfrancois.blog.free.fr


