[Rd] Sys.sleep() burns up CPU on Solaris 8

Stephen C. Pope scp at predict.com
Sat Apr 1 08:31:35 CEST 2006


I noticed that R was burning up 100% of a CPU whenever Sys.sleep() was 
called. Upon investigation, I discovered that R_checkActivityEx() in 
src/unix/sys-std.c was putting the entire timeout (in usec) into the 
tv_usec member of the struct timeval, leaving tv_sec set to 0.

I don't know about other Unix variants, but Solaris requires that the 
timeout value be normalized (i.e. 0 <= tv_usec < 1000000). Because a 
value of 1000000 or more was being placed in tv_usec, select() was 
failing immediately with EINVAL, causing R_checkActivityEx() to 
return immediately. This caused do_syssleep() to busy-loop until the 
desired sleep time had elapsed. A rather sleepless sleep ;-).
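For illustration only (this is not part of the patch), here is a minimal 
sketch of the normalization that strict select() implementations expect; 
the helper name is made up:

    #include <sys/time.h>

    /* Split a microsecond count into whole seconds plus leftover
       microseconds.  Solaris (and POSIX in general) rejects a
       select() timeout whose tv_usec falls outside [0, 1000000),
       failing with EINVAL.  Helper name is hypothetical. */
    static void usec_to_timeval(long usec, struct timeval *tv)
    {
        tv->tv_sec  = usec / 1000000;   /* whole seconds */
        tv->tv_usec = usec % 1000000;   /* remainder, always < 1000000 */
    }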

The following patch against R-2.2.1 fixes the problem. Note that the 
test of the return value of R_SelectEx() was also incorrect; although 
harmless, it treated the error return (-1) as a successful return.

stephen pope
scp at predict.com


Here's the patch.

build_main at mambo:/vobs/products/R> gdiff -ub src/unix/sys-std.c@@/main/3 src/unix/sys-std.c
--- src/unix/sys-std.c@@/main/3 Thu Jan 12 11:39:55 2006
+++ src/unix/sys-std.c  Fri Mar 31 23:12:16 2006
@@ -294,13 +294,13 @@
         else onintr();
      }

-    tv.tv_sec = 0;
-    tv.tv_usec = usec;
+    tv.tv_sec = usec/1000000;
+    tv.tv_usec = usec % 1000000;
      maxfd = setSelectMask(R_InputHandlers, &readMask);
      if (ignore_stdin)
         FD_CLR(fileno(stdin), &readMask);
      if (R_SelectEx(maxfd+1, &readMask, NULL, NULL,
-                  (usec >= 0) ? &tv : NULL, intr))
+                   (usec >= 0) ? &tv : NULL, intr) > 0)
         return(&readMask);
      else
         return(NULL);
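
For reference, select() returns the number of ready descriptors, 0 on 
timeout, and -1 on error, so only a strictly positive return means input 
is actually available. A sketch of that pattern (the function name is 
hypothetical; this is not R's code):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/select.h>

    /* A bare `if (select(...))` treats the -1 error return as true,
       which is exactly the harmless bug the patch fixes; only a
       return value > 0 indicates a ready descriptor. */
    int input_ready(int maxfd, fd_set *readMask, struct timeval *tv)
    {
        int n = select(maxfd + 1, readMask, NULL, NULL, tv);
        if (n < 0)
            fprintf(stderr, "select: %s\n", strerror(errno));
        return n > 0;
    }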


