--- title: "Lucid printing of floating-point vectors" author: "Kevin Wright" date: "`r Sys.Date()`" output: rmarkdown::html_vignette: bibliography: lucid.bib vignette: > %\VignetteIndexEntry{Lucid printing of floating-point vectors} %\VignetteEncoding{UTF-8} %\VignetteEngine{knitr::rmarkdown} --- ## Setup ```{r setup, results="hide"} library("knitr") opts_chunk$set(fig.align="center", fig.width=6, fig.height=6) options(width=90) ``` ## Abstract @farquhar1891economic provide a humorous quote about tables: > The graphic method has considerable superiority for the exposition of > statistical facts over the tabular. A heavy bank of figures is grievously > wearisome to the eye, and the popular mind is as incapable of drawing > any useful lessons from it as of extracting sunbeams from cucumbers. The `lucid` package intends to make your life easier by helping you extract information from tables. The package has functions for printing vectors and tables of floating-point numbers in a human-friendly format. An application is presented for printing of variance components from mixed models. ## Introduction Numerical output from R is often in scientific notation, which can make it difficult to quickly glance at numbers and understand the relative sizes of the numbers. This not a new phenomenon. Before R had been created, @finney1988data had this to say about numerical output: > Certainly, in initiating analyses by standard software or in writing one's > own software, the aim should be to have output that is easy to read and easily > intelligible to others. ... Especially undesirable is the so-called > 'scientific notation' for numbers in which every number is shown as a value > between 0.0 and 1.0 with a power of 10 by which it must be multiplied. For > example: > ``` > 0.1234E00 is 0.1234 > 0.1234E02 is 12.34 > 0.1234E-1 is 0.01234 > ``` > This is an abomination which obscures the comparison of related quantities; > tables of means or of analyses of variance become very difficult to read. It > is acceptable as a default when a value is unexpectedly very much larger or > smaller than its companions, but its appearance as standard output denotes > either lazy programming or failure to use good software properly. Like > avoidance of 'E', neat arrangement of output values in columns, with decimal > points on a vertical line, requires extra effort by a programmer but should be > almost mandatory for any software that is to be used often. One recommendation for improving the display of tables of numbers is to round numbers to 2 [@wainer1997improving] or 3 [@feinberg2011extracting, @clark1965presentation] digits for the following reasons: 1. We cannot *comprehend* more than three digits very easily. 2. We seldom *care* about accuracy of more than three digits. 3. We can rarely *justify* more than three digits of accuracy statistically. An alternative to significant digits is the concept of _effective digits_ [@ehrenberg1977rudiments, @kozak2011reporting], which considers the amount of variation in the data. In R, the `round()` and `signif()` functions can be used to round to 3 digits of accuracy, but those functions can still print results in scientific notation and leave much to be desired. The `lucid` package provides functions to improve the presentation of floating point numbers in a clear (or lucid) way that makes interpretation of the numbers immediately apparent. 
Consider the following vector of coefficients from a fitted model:

```{r echo=FALSE}
df1 <- data.frame(effect=c(113.5, -13.5, 4.5, 24.5, 6.927792e-14, -1.75, 16.5))
rownames(df1) <- c("(Intercept)","A","B","C","C1","C2","D")
print(df1)
```

Questions of interest about the coefficients might include:

1. Which coefficient is zero?
2. How large is the intercept?

Both questions can be answered using the output shown above, but it takes too
much effort to answer them. Now examine the same vector of coefficients with
prettier formatting:

```{r message=FALSE}
require("lucid")
options(digits=7) # knitr defaults to 4, R console uses 7
lucid(df1)
```

Which coefficient is zero? How large is the intercept? Printing the numbers
with the `lucid()` function has made the questions much easier to answer.

The sequence of steps used by `lucid()` to format and print the output is:

1. Zap small numbers to zero using `zapsmall()`.
2. Round to 3 significant digits (a user-controllable option).
3. Drop trailing zeros.
4. Align numbers at the decimal point (text format).

The `lucid` package contains a generic function `lucid()` with specific
methods for numeric vectors, data frames, and lists. The method for data
frames applies formatting to each numeric column and leaves other columns
unchanged. The `lucid()` function is primarily a _formatting_ function, the
results of which are passed to the regular `print()` functions.

## Example: Antibiotic effectiveness

@wainer2009pictures present data published by Will Burtin in 1951 on the
effectiveness of antibiotics against 16 types of bacteria. The data are
included in the `lucid` package as a data frame called `antibiotic`. The
default view of this data is:

```{r}
print(antibiotic)
```

Due to the wide range in magnitude of the values, nearly half of the
floating-point numbers in the default view contain trailing zeros after the
decimal, which adds significant clutter and impedes interpretation. The
`lucid()` display of the data is:

```{r}
lucid(antibiotic)
```

The `lucid()` display is dramatically simplified, providing a clear picture of
the effectiveness of the antibiotics against the bacteria. This view of the
data exactly matches the appearance of Table 1 in @wainer2009pictures.

A stem-and-leaf plot is a semi-graphical display of data, in that the
_positions_ of the numbers create a display similar to a histogram. In a
similar manner, the `lucid()` output is a semi-graphical view of the data. The
figure below shows a dotplot of the penicillin values on a reverse log10
scale. The values are also shown along the right axis in `lucid()` format.
Note the similarity in the overall shape of the dots and the positions of the
left-most significant digit in the numerical values along the right axis.

```{r dotplot, echo=FALSE, message=FALSE, fig.height=4, fig.width=6}
require(lattice)
anti <- antibiotic # make a copy of the data to reverse the levels
anti$bacteria <- factor(anti$bacteria, levels=rev(anti$bacteria))
cust <- function(...) { # custom y-scale component
  ans <- yscale.components.default(...)
  ans$right <- ans$left
  foo <- ans$right$labels$at
  ans$right$labels$labels <- rev(c(" 870 "," 1 "," 0.001"," 0.005"," 100 ",
                                   " 850 "," 800 "," 3 "," 850 "," 1 ",
                                   " 10 "," 0.007"," 0.03 "," 1 ",
                                   " 0.001"," 0.005"))
  return(ans)
}
dotplot(bacteria ~ -log10(penicillin), anti, cex=1, xlim=c(-4,4),
        #xlab="variance component (log10 scale)",
        scales=list(x=list(at=c(-2,0,2), labels=c('100','1','.01')),
                    y=list(relation="free", fontfamily='mono')), # 'free' required for 2nd axis
        yscale.components=cust,
        # this creates more space on the right hand side of the plot
        par.settings=list(layout.widths=list(left.padding=10, right.padding=10))
        )
```

## Example: Using `lucid` with `broom`

The `broom` package by @robinson2016broom can be used to collect statistics
from fitted models into tidy data frames. For example, using the `Orange`
tree data, it is possible to fit a separate regression line for each tree.
(The straight-line regression here is not entirely sensible, but illustrates
a point.)

```{r, broom1}
require(dplyr)
require(broom)
Orange %>% group_by(Tree) %>% do(tidy(lm(circumference ~ age, data=.)))
```

Extracting information from the sea of numbers above is difficult. The
`lucid` function comes to the rescue, simply by adding one more step to the
sequence of pipes.

```{r, broom2}
Orange %>% group_by(Tree) %>% do(tidy(lm(circumference ~ age, data=.))) %>% lucid
```

After formatting, the information in the table almost jumps out at the reader,
reducing the amount of cognitive effort needed for interpretation.

## Example: Application to mixed models

During the iterative process of fitting mixed models, it is often useful to
compare the fits of different models to the data, for example using
loglikelihood or AIC values, or with the help of residual plots. It can also
be very informative to inspect the estimated values of the variance
components. To that end, the generic `VarCorr()` function found in the `nlme`
[@pinheiro2014nlme] and `lme4` [@bates2014lme4] packages can be used to print
variance estimates from fitted models. The `VarCorr()` function is not
available for models obtained with the `asreml` package [@butler2009asreml].

The `lucid` package provides a generic function called `vc()` that gives a
unified interface for extracting the variance components from fitted models
obtained from the `asreml`, `lme4`, `nlme`, and `rjags` packages. The `vc()`
function has methods specific to each package that make it easy to extract
the estimated variances and correlations from fitted models and format the
results using the `lucid()` function.

@pearce1988manual suggest showing four significant digits for the error mean
square and two decimal places for $F$ values. The `vc()` function uses a
similar philosophy, presenting the variances with four significant digits and
`asreml` $Z$ statistics with two significant digits.

### vc() example 1 - Rail data

The following simple example illustrates the use of the `vc()` function for
identical REML models fit with the `nlme`, `lme4`, and `asreml` packages. The
travel times of ultrasonic waves in six steel rails were modeled as an
overall mean, a random effect for each rail, and a random residual (this
model is written out just below). The package `rjags` is used to fit a
similar Bayesian model inspired by @wilkinson2014oneway.
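For orientation, the model just described is the usual one-way random-effects
model,

$$y_{ij} = \mu + b_i + e_{ij}, \qquad b_i \sim N(0, \sigma^2_{\mathrm{Rail}}),
\qquad e_{ij} \sim N(0, \sigma^2_e),$$

where $y_{ij}$ is the $j$-th travel time measured on rail $i$, $\mu$ is the
overall mean, $b_i$ is the random effect of rail $i$, and $e_{ij}$ is the
residual. (This equation is added here only for reference; it is not taken
from the package documentation.) The variance components reported by `vc()`
below are the estimates of $\sigma^2_{\mathrm{Rail}}$ and $\sigma^2_e$.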
#### nlme

```{r nlme}
require("nlme")
data(Rail)
mn <- lme(travel~1, random=~1|Rail, data=Rail)
vc(mn)
```

#### lme4

```{r lme4}
require("lme4")
m4 <- lmer(travel~1 + (1|Rail), data=Rail)
vc(m4)
```

#### asreml

```{r message=FALSE}
# require("asreml")
# ma <- asreml(travel~1, random=~Rail, data=Rail)
# vc(ma)
##        effect component std.error z.ratio constr
## Rail!Rail.var     615.3     392.6     1.6    pos
##    R!variance     16.17       6.6     2.4    pos
```

#### JAGS

In a Bayesian model, all effects can be considered random.

```{r jags, eval=FALSE, echo=TRUE, message=FALSE}
require("nlme")
data(Rail)
require("rjags")

m5 <- "model {
  for(i in 1:nobs){
    travel[i] ~ dnorm(mu + theta[Rail[i]], tau)
  }
  for(j in 1:6) {
    theta[j] ~ dnorm(0, tau.theta)
  }
  mu ~ dnorm(50, 0.0001) # Overall mean
  tau ~ dgamma(1, .001)
  tau.theta ~ dgamma(1, .001)
  residual <- 1/sqrt(tau)
  sigma.rail <- 1/sqrt(tau.theta)
}"

jdat <- list(nobs=nrow(Rail), travel=Rail$travel, Rail=Rail$Rail)
jinit <- list(mu=50, tau=1, tau.theta=1)
tc5 <- textConnection(m5)
j5 <- jags.model(tc5, data=jdat, inits=jinit, n.chains=2, quiet=TRUE)
close(tc5)
c5 <- coda.samples(j5, c("mu","theta", "residual", "sigma.rail"),
                   n.iter=100000, thin=5, progress.bar="none")
```

```{r, eval=FALSE, echo=FALSE}
vc(c5)
```

Compare the JAGS point estimates and quantiles (above) with the results from
`lme4` below.

```{r}
m4
ranef(m4)
```

While the `lucid()` function is primarily a formatting function and uses the
standard `print()` functions in R, the `vc()` function defines an additional
class for its return value and has dedicated `print` methods for that class.
This was done to allow additional formatting of the results.

### vc() example 2 - Analysis of federer.diagcheck data

The second, more complex example is based on a model in @federer2003proc in
which orthogonal polynomials are used to model trends along the rows and
columns of a field experiment. The data are available in the `agridat`
package [@wright2014agridat] as the `federer.diagcheck` data frame. The help
page for that data shows how to reproduce the analysis of @federer2003proc.
When using the `lme4` package to reproduce the analysis, two different
optimizers are available. Do the two optimizers lead to similar estimated
variances?

In the output below, the first column identifies terms in the model, the next
two columns are the variance and standard deviation from the 'bobyqa'
optimizer, and the final two columns are from the 'NelderMead' optimizer.
Note, these results are from `lme4` version 1.1-7 and are likely to differ
from the results given by more recent versions of `lme4`. The default output
printing is shown first.
```{r echo=FALSE} # Results are from lme4_1.1-7, as.data.frame(VarCorr(m2b)) d1 <- structure(list(grp = c("new.gen", "one", "one.1", "one.2", "one.3", "one.4", "one.5", "one.6", "one.7", "one.8", "one.9", "one.10", "one.11", "one.12", "one.13", "Residual"), var1 = c("(Intercept)", "r1:c3", "r1:c2", "r1:c1", "c8", "c6", "c4", "c3", "c2", "c1", "r10", "r8", "r4", "r2", "r1", NA), var2 = c(NA_character_, NA_character_, NA_character_, NA_character_, NA_character_, NA_character_, NA_character_, NA_character_, NA_character_, NA_character_, NA_character_, NA_character_, NA_character_, NA_character_, NA_character_, NA_character_), vcov = c(2869.44692139271, 5531.57239635089, 58225.767835444, 128004.156092455, 6455.74953933247, 1399.72937329085, 1791.65071661348, 2548.88470543732, 5941.79076230161, 0, 1132.95013713932, 1355.22907294114, 2268.72957045473, 241.789424531994, 9199.9021721834, 4412.1096176349), sdcor = c(53.5672187199663, 74.3745413185916, 241.300161283502, 357.7766846686, 80.3476791160297, 37.4129572914365, 42.3278952537623, 50.4864804223598, 77.0830121511972, 0, 33.6593246684974, 36.8134360382339, 47.6311827530529, 15.5495795612613, 95.9161205021523, 66.4237127661116)), .Names = c("grp", "var1", "var2", "vcov", "sdcor"), row.names = c(NA, -16L), class = "data.frame") d2 <- structure(list(grp = c("new.gen", "one", "one.1", "one.2", "one.3", "one.4", "one.5", "one.6", "one.7", "one.8", "one.9", "one.10", "one.11", "one.12", "one.13", "Residual"), var1 = c("(Intercept)", "r1:c3", "r1:c2", "r1:c1", "c8", "c6", "c4", "c3", "c2", "c1", "r10", "r8", "r4", "r2", "r1", NA), var2 = c(NA_character_, NA_character_, NA_character_, NA_character_, NA_character_, NA_character_, NA_character_, NA_character_, NA_character_, NA_character_, NA_character_, NA_character_, NA_character_, NA_character_, NA_character_, NA_character_), vcov = c(3228.41890564251, 7688.13916836557, 69747.5508913552, 107427.043198654, 6787.00354507896, 1636.12771714548, 12268.4603217744, 2686.30159414561, 7644.72994565782, 0.00122514315152732, 1975.50482871438, 1241.42852423718, 2811.24084391436, 928.227473340838, 10363.5849610346, 4126.83169047631), sdcor = c(56.8191772700249, 87.6820344675326, 264.097616216722, 327.760649252856, 82.3832722406616, 40.4490756031022, 110.763081944186, 51.8295436420735, 87.4341463368736, 0.0350020449620779, 44.4466514904597, 35.23391156595, 53.02113582256, 30.4668257838069, 101.801694293536, 64.2404210017051)), .Names = c("grp", "var1", "var2", "vcov", "sdcor"), row.names = c(NA, -16L), class = "data.frame") out <- cbind(d1[, c(2,4,5)], sep=" ",d2[,4:5]) names(out) <- c('term','vcov-bo','sdcor-bo','sep','vcov-ne','sdcor-ne') ``` ```{r} print(out) ``` How similar are the variance estimates obtained from the two optimization methods? It is difficult to compare the results due to the clutter of extra digits, and because of some quirks in the way R formats the output. The variances in column 2 are shown in non-scientific format, while the variances in column 5 are shown in scientific format. The standard deviations are shown with 5 decimal places in column 3 and 8 decimal places in column 6. (All numbers were stored with 15 digits of precision.) The `lucid()` function is now used to show the results in the manner of the `vc()` function. ```{r} lucid(out, dig=4) ``` The formatting of the variance columns is consistent as is the formatting of the standard deviation columns. Fewer digits are shown. It is easy to compare the columns and see that the two optimizers are giving quite different answers. 
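To make "quite different" concrete, one can compute the ratio of the two
variance columns directly. This is a quick ad-hoc check added here for
illustration (it is not part of the original analysis); the row with a zero
'bobyqa' variance is dropped because its ratio is not finite.

```{r}
# Ad-hoc check: ratio of NelderMead to bobyqa variance estimates
ratio <- out[["vcov-ne"]] / out[["vcov-bo"]]
ok <- is.finite(ratio)  # drop the term with a zero bobyqa variance
lucid(data.frame(term=out$term[ok], ratio=ratio[ok]))
```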
Note: The 'bobyqa' results are almost identical to the results obtained when
using ASREML or SAS.

Note: `lucid()` output for data frames is printed without quotes, but
matrices are converted to character and therefore printed with quotes. Use
`noquote()` to print a matrix without quotes, for example:

```{r}
noquote(lucid(as.matrix(head(mtcars)), 2))
```

## References