Comments on Data Literacy - The blog of Andrés Gutiérrez: Multilevel regression with poststratification (Gelman's MrP) in R - What is this all about?

Andrés Gutiérrez (2017-01-25):
Also, you can perform an EM algorithm, as stated on pages 134 and 135 of the seminal paper: http://www.statcan.gc.ca/pub/12-001-x/1997002/article/3616-eng.pdf

Andrés Gutiérrez (2017-01-25):
Seba, Henrik ... thanks for sharing your questions. The parameter of interest is defined as:

$\theta_h = \frac{\sum_{j \in h} N_j \mu_j }{\sum_{j \in h} N_j}$

When the modeling follows a normal Bayesian setup, the posterior distribution of $\theta_h$ is conjugate, so computing its variance is straightforward. If the modeling is frequentist, it is just a matter of algebra, since $\theta_h$ is a linear combination of the $\mu_j$, each of which has an associated variance. Then, assuming independence among the $\mu_j$'s, we have:

$Var(\theta_h) = \frac{\sum_{j \in h} N_j^2 Var(\mu_j) }{(\sum_{j \in h} N_j)^2}$

Finally, $Var(\mu_j)$ can be found in the M1 object.

Henrik Singmann (2017-01-17):
Very interesting and informative blog post. Two things.

1. As the previous person asks, what about CIs around the final estimate?

2. Obtaining the predictions per random-effect and fixed-effect combination via predict seems unnecessarily complicated. What about:

Mupred <- t(coef(M1)$Zone)
Mupred[2, ] <- Mupred[1, ] + Mupred[2, ]
Mupred[3, ] <- Mupred[1, ] + Mupred[3, ]
Mupred

Seba Daza (2017-01-16):
What about the uncertainty of the estimates?
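The variance formula in the thread can be illustrated numerically. This is a minimal sketch with made-up values: `N`, `mu`, and `var_mu` stand in for the poststratum sizes $N_j$, the model means $\mu_j$, and their variances $Var(\mu_j)$ (which, in the post, would come from the fitted M1 object).

```python
# Hypothetical inputs for one area h (not taken from the post's data).
N = [120, 80, 50]            # poststratum sizes N_j
mu = [2.0, 3.0, 2.5]         # estimated means mu_j per poststratum
var_mu = [0.04, 0.09, 0.16]  # Var(mu_j) from the model

total = sum(N)

# theta_h = sum_j N_j * mu_j / sum_j N_j
theta_h = sum(n * m for n, m in zip(N, mu)) / total

# Var(theta_h) = sum_j N_j^2 * Var(mu_j) / (sum_j N_j)^2,
# assuming the mu_j are independent, as in the comment above.
var_theta_h = sum(n**2 * v for n, v in zip(N, var_mu)) / total**2

print(theta_h, var_theta_h)
```

Because $\theta_h$ is a size-weighted average, large poststrata dominate both the estimate and its variance; a normal-approximation CI would then be $\theta_h \pm 1.96\sqrt{Var(\theta_h)}$, which also answers the uncertainty question below.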