[R] Handling ever-growing data in SVM predictions

Divyam divyamurali13 at gmail.com
Wed Sep 7 10:51:10 CEST 2011


Hi,

I am new to R, and here is what I am doing with it at the moment. I am using a
machine learning technique (SVM) to make predictions. The data I am working
with is bound to grow perpetually. What I want to know is this: say I initially
feed a data set of 5000 points to the SVM. The algorithm derives a certain
intelligence (i.e., output) from these 5000 points. Today I have an additional
10000 points. If I now remove the original 5000 points and feed in the new
10000, I want the algorithm to still make use of the intelligence derived from
the initial 5000 points while evaluating the new 10000, so that the end result
is an aggregated measure over the full 15000 points. This is important to me
from an efficiency point of view.
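
To make this concrete, the kind of update I have in mind looks roughly like
the sketch below. It is only a minimal sketch using the e1071 package;
old_data, new_data and fresh_points are hypothetical data frames with a factor
response 'y', and keeping just the previous model's support vectors is a
heuristic stand-in for the discarded old batch, not an exact incremental
update that svm() itself provides.

library(e1071)

## Initial model on the first batch (the 5000 points).
fit_old <- svm(y ~ ., data = old_data, kernel = "radial")

## Heuristic update: keep only the rows that became support vectors,
## discard the rest of the old batch, and refit together with the new
## 10000 points. This approximates, but does not exactly reproduce,
## a model trained on all 15000 points.
sv_rows <- old_data[fit_old$index, ]
fit_new <- svm(y ~ ., data = rbind(sv_rows, new_data), kernel = "radial")

## Predictions from the updated model.
pred <- predict(fit_new, newdata = fresh_points)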
If there are any other packages in R that do the same thing (i.e., that let a
statistical model keep learning from past experience while the prior data from
which that intelligence was derived is deleted), kindly post about them. That
would be of immense help to me.

Thanks in advance.

divya



