

Question

I would appreciate it if someone could give me some help with Python code to do the following:

1) Re-run the algorithm using only the 4th dimension of x_i. Set b = 5 and sigma_squared = 2.

2) Show a scatter plot of the data (x[4] versus y for each point).

3) Also plot, as a solid line, the predictive mean of the Gaussian process at each point in the training set.

Dataset used:

Xtraindata (350 rows, 7 columns)

Xtestdata (42 rows, 7 columns)

Ytraindata (350 rows, 1 column)

Ytestdata (42 rows, 1 column)

b = [5, 7, 9, 11, 13, 15]      (## I know in the final solution I need to restrict b to 5)
sigma squared = [.1, .2, .3, .4, .5, .6, .7, .8, .9, 1]   (## I know in the final solution I need to restrict sigma squared to 2)

This is the original algorithm used for the first part of my question, run after reading in the above data. How do I change it to answer the question above?
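For reference, the quantity the code computes is the standard Gaussian-process predictive mean with a Gaussian kernel; writing the code's variables in math notation:

```latex
K_n[i,j] = \exp\!\left(-\tfrac{1}{b}\,\lVert x_i - x_j \rVert^2\right),
\qquad
\mu(x) = K(x, D_n)\,\bigl(\sigma^2 I + K_n\bigr)^{-1} y
```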

import numpy
from numpy.linalg import norm
from math import sqrt
from sklearn.metrics import mean_squared_error

# K(Dn, Dn): Gaussian kernel matrix over the training set
def KN(b):
    rows = Xtraindata.shape[0]
    matrix = numpy.zeros((rows, rows))
    for i in range(rows):
        for j in range(rows):
            matrix[i][j] = numpy.exp((-1/b) * norm(Xtraindata[i] - Xtraindata[j])**2)
    return matrix
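As a side note, the O(n^2) Python double loop above can be vectorized. A minimal sketch, assuming scipy is available (`KN_vectorized` is a hypothetical name, and `X` stands for the training array):

```python
import numpy
from scipy.spatial.distance import cdist

def KN_vectorized(X, b):
    # Pairwise squared Euclidean distances in one call, then the Gaussian kernel
    return numpy.exp(-cdist(X, X, 'sqeuclidean') / b)

# Small usage example on random data with the same width as Xtraindata
rng = numpy.random.RandomState(0)
X = rng.rand(10, 7)
K10 = KN_vectorized(X, 5.0)
```

This returns the same matrix as the double loop but delegates the distance computation to compiled code, which matters for a 350x350 kernel recomputed in a grid search.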

# Calculating K(x, Dn): kernel vector between one test point and the training set
def K(xTestEntry, b):
    rows = len(Xtraindata)
    matrix = numpy.zeros((1, rows))
    for i in range(rows):
        matrix[0][i] = numpy.exp((-1/b) * norm(Xtraindata[i] - xTestEntry)**2)
    return matrix

# Looping over b and sigma squared.
n = len(Xtraindata)
for b in [5, 7, 9, 11, 13, 15]:
    kn = KN(b)                  # depends only on b, so compute it once per b
    for sigma in [.1, .2, .3, .4, .5, .6, .7, .8, .9, 1]:
        ux = []
        for xTestEntry in Xtestdata:
            kxdn = K(xTestEntry, b)
            idMat = numpy.identity(n) * sigma
            mid = numpy.linalg.inv(idMat + kn)
            x1 = numpy.dot(kxdn, mid)
            ypredict = numpy.dot(x1, Ytraindata)
            ux.append(ypredict[0])
        rms = sqrt(mean_squared_error(Ytestdata, ux))
        print(b, sigma, rms)
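A minimal sketch of the changes the question asks for, assuming the "4th dimension" x[4] means column index 3 of the 0-indexed arrays and that matplotlib is used for the plots. The random arrays below are hypothetical stand-ins with the same shapes as the question's data; with the real data, the first block would be replaced by the existing loading code:

```python
import numpy
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so this also runs headless
import matplotlib.pyplot as plt

# Hypothetical stand-in data with the same shapes as the question's arrays
rng = numpy.random.RandomState(0)
Xtraindata = rng.rand(350, 7)
Ytraindata = numpy.sin(6 * Xtraindata[:, 3:4]) + 0.1 * rng.randn(350, 1)

def gp_predictive_mean(xtr, ytr, xq, b, sigma2):
    # Gaussian kernel over the (1-D) training inputs and the query points
    KN = numpy.exp(-(xtr[:, None] - xtr[None, :])**2 / b)
    Kq = numpy.exp(-(xq[:, None] - xtr[None, :])**2 / b)
    # mu(x) = K(x, Dn) (sigma^2 I + KN)^{-1} y
    return Kq @ numpy.linalg.inv(sigma2 * numpy.eye(len(xtr)) + KN) @ ytr

# 1) Only the 4th dimension, with b = 5 and sigma^2 = 2
b, sigma2 = 5, 2
x4 = Xtraindata[:, 3]          # 4th dimension, assuming 0-based column 3
y = Ytraindata.ravel()
mu = gp_predictive_mean(x4, y, x4, b, sigma2)   # mean at each training point

# 2) Scatter plot of x[4] versus y, and 3) predictive mean as a solid line
order = numpy.argsort(x4)      # sort so the solid line is drawn left to right
plt.scatter(x4, y, label='training data')
plt.plot(x4[order], mu[order], 'r-', label='GP predictive mean')
plt.xlabel('x[4]')
plt.ylabel('y')
plt.legend()
plt.savefig('gp_dim4.png')
```

Sorting by x4 before plotting the line is the key plotting detail: the training points are not ordered along the 4th dimension, so plotting `mu` in row order would produce a tangle rather than a solid curve.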

Explanation / Answer

## Sample with weak prior on rho
stan_data_rho_weak = copy.copy(stan_data)
stan_data_rho_weak['alpha_rho'] = 1
stan_data_rho_weak['beta_rho'] = 1/100.

## Sample with pystan
#stan_model_samp_rho_weak = stan_model_compiled.sampling(data=stan_data_rho_weak, iter=Niter, chains=Nchains, control=cdic)

## Sample with cmdstan
## Delete any old samples first
os.system('rm output_cmdstan_gp_rhoweak_samples*.csv')
stanhelper.stan_rdump(stan_data_rho_weak, 'input_data_rhoweak_final.R')
p = []
for i in range(Nchains):
    # Build the cmdstan call as a single shell line (a multi-line string would
    # be split into separate shell commands by Popen(shell=True))
    cmd = ("{0}user-models/gp_model_final "
           "data file='input_data_rhoweak_final.R' "
           "sample num_warmup={2} num_samples={2} "
           "adapt delta={4} "
           "algorithm=hmc engine=nuts max_depth={3} "
           "random seed=1002 id={1} "
           "output file=output_cmdstan_gp_rhoweak_samples{1}.csv"
           ).format(cmdstan_path, i + 1, Niter // 2, cdic['max_treedepth'], cdic['adapt_delta'])
    p += [subprocess.Popen(cmd, shell=True)]
# Wait for all chains to finish
for i in range(Nchains):
    p[i].wait()