[R] Hyperbolic code in R studio ?

Rui Barradas ru|pb@rr@d@@ @end|ng |rom @@po@pt
Sat Dec 21 18:14:39 CET 2019


Hello again,

Sorry, clicked <Send> before signing.

Rui Barradas

On 21/12/19 17:02, Rui Barradas wrote:
> Hello,
> 
> I agree with Jeff and also:
> 
> I don't want to be snarky but can't resist: the question's subject title 
> is provocative.
> 
> Hyperbole is the figure of speech that consists of exaggeration of 
> expression, enlarging the true dimension of things. In simpler terms, 
> hyperbole is the overtly exaggerated expression of an idea.
> 
> 
> *Minimal* reproducible examples must be, well, minimal.
> Do you really need all that code to show you are finding the generation 
> of a hyperbolic function difficult? If the question is on how to create 
> a function, why several functions?
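> 
> For instance, something along these lines (an untested sketch, with 
> simulated returns standing in for your data) would already show whether 
> the GH fit itself is the problem; fit.ghypuv() is the maximum likelihood 
> fitter that ships with the ghyp package:
> 
> library(ghyp)
> set.seed(1)
> r <- rnorm(500, sd = 0.01)   # stand-in for the real return series
> fit <- fit.ghypuv(r)         # ML fit of a univariate generalized hyperbolic
> summary(fit)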
> 
> On 21/12/19 15:45, Jeff Newmiller wrote:
>> It is not so much a question of "bother us" as it is of "following the 
>> Posting Guide" and "not interfering with an educational institution by 
>> helping to cheat".
>>
>> Please read the Posting Guide (esp. about homework help) and next time 
>> change your mail program settings so you send plain text to the list 
>> (HTML formatted email can become unreadable on our end of your 
>> transmission).
>>
>> On December 21, 2019 1:33:16 AM PST, Aissata Kagnassi 
>> <tatikagnassi using gmail.com> wrote:
>>> Good morning everyone,
>>>
>>>
>>>
>>> I don't want to bother you. I'm new at using R. :)
>>>
>>> 1. I was wondering if someone could help me figure out why I can't
>>> generate the code to get a hyperbolic function.
>>>
>>>
>>>
>>> 2. My second question: I generated the code and have no problem with
>>> the other distributions, but I still can't get the graphics displayed.
>>>
>>>
>>>
>>> Here are the instructions for my exercise and here is the code I used:
>>>
>>>
>>>
>>> **Instructions**
>>>
>>> Project: hereafter the series of financial returns will be referred to
>>> as yt and the series of fundamentals as xt. Here are the questions you
>>> need to raise and answer:
>>>
>>> Part 1: maximum likelihood estimation, Student test and goodness of fit.
>>>
>>> 1. Consider the following model:
>>>
>>> yt = μ + σ εt, with εt a disturbance such that E[εt] = 0 and V[εt] = 1.
>>>
>>> *Estimate the parameters of the following distributions by maximum
>>> likelihood using the Yt:*
>>>
>>> • Gaussian distribution N(0,1)
>>>
>>> • Student-t distribution with parameter ν
>>>
>>> • A mixture of Gaussian distribution MN(φ, μ1, σ1, μ2, σ2)
>>>
>>> • A generalized hyperbolic distribution GH(λ, α, β, δ, μ).
>>>
>>>
>>>
>>> 2. *Test the parameters for their significance using a Student test.
>>> Are all the parameters statistically significant?*
>>>
>>>
>>>
>>> **My code:**
>>>
>>> For the first three distributions, I answered questions one and two
>>> (the parts in italics) without problems. But I have a problem with the
>>> last one.
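>>>
>>> For the significance tests I follow the same idea throughout, roughly
>>> sketched here assuming an optim() fit obtained with hessian = TRUE:
>>>
>>> se   <- sqrt(diag(solve(fit$hessian)))  # asymptotic standard errors
>>> tval <- fit$par / se                    # t statistics for H0: parameter = 0
>>> # a parameter is (roughly) significant at the 5% level when abs(tval) > 1.96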
>>>
>>> library(readxl)
>>>
>>> Data <- read_excel("Data_project.xlsx", sheet = 2, skip = 5, col_names = FALSE)
>>>
>>> #y variable to explain, Nikkei 225 index
>>> y <- Data[16]
>>> # x variable explanatory variable, Australian Dollar vs. US Dollar
>>> x <- Data[30]
>>>
>>> returns_y = diff(as.matrix(log(y)), 1)
>>> returns_x = diff(as.matrix(log(x)), 1)
>>> returnsy_std = scale(returns_y)
>>>
>>>
>>>
>>> #1. Estimate the parameters by maximum likelihood
>>> #Gaussian distribution N(0, 1)
>>>
>>> mu = 0
>>> sigma = 0.1
>>> para0 = c(mu, sigma)
>>>
>>>
>>>
>>> loglikG <- function(para, returns_y)
>>> {
>>>   mu = para[1]
>>>   sigma = para[2]
>>>   print(para)
>>>   logl = sum(log(dnorm((returns_y - mu)/sigma, 0, 1)/sigma))
>>>   return(-logl)
>>> }
>>>
>>> loglikG(para0, returns_y)
>>>
>>> fit <- optim(para0, loglikG, gr = NULL, returns_y, method = "BFGS", hessian = TRUE)
>>>
>>>
>>>
>>> paraopt <- fit[["par"]]
>>>
>>> Hessian = fit$hessian
>>> invh = solve(Hessian)
>>>
>>> t1 = sqrt(invh[1,1])
>>> t2 = sqrt(invh[2,2])
>>> testzmu = paraopt[1]/t1
>>> testzvar = paraopt[2]/t2
>>> print(testzmu)
>>> print(testzvar)
>>>
>>>
>>>
>>>
>>>
>>> #T-student
>>>
>>> para_t = c(0, 0.012, 5)
>>>
>>> loglikt <- function(para, returns_y){
>>>   mut = para[1]
>>>   sigmat = para[2]
>>>   nu = para[3]
>>>   m = -sum(log(dt((returns_y - mut)/sigmat, df = nu)/sigmat))
>>>   return(m)
>>> }
>>>
>>>
>>>
>>> loglikt(para_t, returns_y)
>>>
>>> output_t = optim(para_t, loglikt, gr = NULL, returns_y, method = "BFGS", hessian = TRUE)
>>> paraopt_t <- output_t[["par"]]
>>>
>>> Hessian_t = output_t$hessian
>>> invh_t = solve(Hessian_t)
>>>
>>> t1_t = sqrt(invh_t[1,1])
>>> t2_t = sqrt(invh_t[2,2])
>>> t3_t = sqrt(invh_t[3,3])
>>> testzmu_t = paraopt_t[1]/t1_t
>>> testzvar_t = paraopt_t[2]/t2_t
>>> testznu_t = paraopt_t[3]/t3_t
>>> print(testzmu_t)
>>> print(testzvar_t)
>>> print(testznu_t)
>>>
>>>
>>>
>>> #Mixture of Gaussian finding initial values
>>> library(LaplacesDemon)
>>>
>>> eps = 0.001
>>> tolerance = 0.95
>>> paraMG = c(-0.02, 0.03, 0.6, 0.8, 0.7)
>>>
>>> likehoodMG <- function(para, returnsy_std)
>>> {
>>>   muM12 = para[1:2]
>>>   sigmaM12 = para[3:4]
>>>   phi = para[5]
>>>   p = c(phi, 1 - phi)
>>>   LM = sum(log(dnormm(returnsy_std, muM12, sigmaM12, p = p)))
>>>   mean_w = p[1]*muM12[1] + p[2]*muM12[2]
>>>   var_w = p[1]*sqrt(sigmaM12[1]) + p[2]*sqrt(sigmaM12[2])
>>>   if((abs(mean_w) > tolerance) || (abs(var_w - 1) > tolerance)){
>>>     return(NaN)
>>>   }
>>>   return(-LM)
>>> }
>>>
>>> likehoodMG(paraMG, returnsy_std)
>>>
>>> outputMG = optim(paraMG, likehoodMG, gr = NULL, returnsy_std, method = "L-BFGS-B",
>>>                  hessian = TRUE, lower = c(eps, eps, -Inf, -Inf, eps),
>>>                  upper = c(Inf, Inf, Inf, Inf, 1 - eps))
>>>
>>>
>>>
>>> paraoptMG = outputMG[["par"]]
>>>
>>>
>>>
>>> #Mixture of Gaussian
>>> #(0.000345,0.023306)
>>> paraM = c(0.000345, 0.023306, 0.0253514, 0.0010000, 0.8715856, 2.7857329, 0.9659020)
>>>
>>>
>>>
>>> likehoodM <- function(para, returns_y)
>>> {
>>>   muM1 = para[1]
>>>   sigmaM = para[2]
>>>   muM12 = para[3:4]
>>>   sigmaM12 = para[5:6]
>>>   phi = para[7]
>>>   p = c(phi, 1 - phi)
>>>   LM = sum(log(dnormm((returns_y - muM1)/sigmaM, muM12, sigmaM12, p = p)/sigmaM))
>>>   return(-LM)
>>> }
>>>
>>> likehoodM(paraM, returns_y)
>>>
>>> outputM = optim(paraM, likehoodM, gr = NULL, returns_y, method = "L-BFGS-B",
>>>                 hessian = TRUE, lower = c(-Inf, eps, eps, eps, -Inf, -Inf, eps),
>>>                 upper = c(Inf, Inf, Inf, Inf, Inf, Inf, 1 - eps))
>>>
>>>
>>>
>>> paraoptM = outputM[["par"]]
>>>
>>> HM = outputM[["hessian"]]
>>> invHM = solve(HM)
>>>
>>> tm1 = sqrt(invHM[1,1])
>>> tm2 = sqrt(invHM[2,2])
>>> tm3 = sqrt(invHM[3,3])
>>> tm4 = sqrt(invHM[4,4])
>>> tm5 = sqrt(invHM[5,5])
>>> tm6 = sqrt(invHM[6,6])
>>> tm7 = sqrt(invHM[7,7])
>>>
>>> testtmum = (paraoptM[1] - 0)/tm1
>>> testtsigmam = paraoptM[2]/tm2
>>> testtmum1 = paraoptM[3]/tm3
>>> testtmum2 = paraoptM[4]/tm4
>>> testtvarm1 = paraoptM[5]/tm5
>>> testtvarm2 = paraoptM[6]/tm6
>>> testtphi = paraoptM[7]/tm7
>>>
>>> print(testtmum)
>>> print(testtsigmam)
>>> print(testtmum1)
>>> print(testtmum2)
>>> print(testtvarm1)
>>> print(testtvarm2)
>>> print(testtphi)
>>>
>>>
>>>
>>> #A generalized hyperbolic distribution GH(λ, α, β, δ, μ).
>>>
>>> library(readxl)
>>> Data <- read_excel("Data_project.xlsx", sheet = 2, skip = 5, col_names = FALSE)
>>>
>>> #y variable to explain, Nikkei 225 index
>>> y <- Data[16]
>>> # x variable explanatory variable, Australian Dollar vs. US Dollar
>>> x <- Data[30]
>>>
>>> returns_y = diff(as.matrix(log(y)), 1)
>>> returns_x = diff(as.matrix(log(x)), 1)
>>> returnsy_std = scale(returns_y)
>>>
>>>
>>>
>>>
>>>
>>> #A generalized hyperbolic distribution GH(λ, α, β, δ, μ).
>>>
>>> library(ghyp)
>>>
>>> para_gh = c(-0.00002, 0.005, 1, 0, 1, 0.1, 0.1) # mu, sigma, alpha, beta, delta, chi, lambda
>>>
>>>
>>>
>>> loglikGH <- function(para, returns_y){
>>>   mugh = para[1]
>>>   sigmagh = para[2]
>>>   alpha = para[3]
>>>   beta = para[4]
>>>   delta = para[5]
>>>   chi = para[6]
>>>   lamda = para[7]
>>>   if(delta < abs(chi)){
>>>     return(10000)
>>>   }else{
>>>     return(-sum(log(dghyp((returns_y - mugh)/sigmagh,
>>>                           object = ghyp(alpha, beta, delta, chi, lamda))/sigmagh)))
>>>   }
>>> }
>>>
>>>
>>>
>>>
>>>
>>> loglikGH(para_gh, returns_y)
>>>
>>> eps = 0.001
>>>
>>> outputGH = optim(para_gh, loglikGH, gr = NULL, returns_y, method = "L-BFGS-B",
>>>                  hessian = TRUE, lower = c(-Inf, eps, eps, eps, -Inf, -Inf, eps),
>>>                  upper = c(Inf, Inf, Inf, Inf, Inf, Inf, Inf))
>>>
>>>
>>>
>>> paraoptGH = outputGH[["par"]]
>>>
>>> HGH = outputGH[["hessian"]]
>>> invHGM = solve(HGH)
>>>
>>> tm1_H = sqrt(invHGM[1,1])
>>> tm2_H = sqrt(invHGM[2,2])
>>> tm3_H = sqrt(invHGM[3,3])
>>> tm4_H = sqrt(invHGM[4,4])
>>> tm5_H = sqrt(invHGM[5,5])
>>> tm6_H = sqrt(invHGM[6,6])
>>> tm7_H = sqrt(invHGM[7,7])
>>>
>>> testtmuH = (paraoptGH[1] - 0)/tm1_H
>>> testtsigmaH = paraoptGH[2]/tm2_H
>>> testtalpha = paraoptGH[3]/tm3_H
>>> testtbetha = paraoptGH[4]/tm4_H
>>> testtdelta = paraoptGH[5]/tm5_H
>>> testtvchi = paraoptGH[6]/tm6_H
>>> testtlambda = paraoptGH[7]/tm7_H
>>>
>>>
>>>
>>> print(testtmuH)
>>> print(testtsigmaH)
>>> print(testtalpha)
>>> print(testtbetha)
>>> print(testtdelta)
>>> print(testtvchi)
>>> print(testtlambda)
>>>
>>>     [[alternative HTML version deleted]]
>>>
>>
> 
> ______________________________________________
> R-help using r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.


