Mike Meredith's home page
Please see the BCSS web site for details of the training workshops I'm involved with.
I've just been using Google to find basic information about key concepts in R, and found it difficult to find good sources. Many of the pages go into unnecessary detail, assume a background in programming in other languages, or use abstruse examples.
So some recommendations seem to be in order.
In this post I want to explore what "closure" means in SCR, why it is important, and what happens if it is violated. I'll also look at some ideas for open-population models which can be used to mitigate closure issues.
Animal activity patterns are in fact aligned to daylight intensity - in particular, sunrise, sunset and zenith - not to the time on your clock or GPS unit or camera (Nouvellet et al, 2012). Since the times of these events vary with the time of year, using clock time instead of sun time can lead to wrong inferences. Nouvellet et al (2012) give the example of hunting behaviour of African wild dogs: impala are almost always taken before sunset and kudu after sunset, but this effect disappears if time is measured relative to 6pm instead of sunset.
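For example, the time of a detection relative to sunset can be calculated with the suncalc package (an assumption on my part; the post does not prescribe a package, and the location, date and time below are made up):

library(suncalc)
sun <- getSunlightTimes(date = as.Date("2012-07-15"),
                        lat = -19.0, lon = 23.4, tz = "Africa/Gaborone",
                        keep = c("sunrise", "sunset"))
detection <- as.POSIXct("2012-07-15 18:43", tz = "Africa/Gaborone")
difftime(detection, sun$sunset, units = "mins")   # minutes after sunset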
Malayan Monster" back in 2011. Some solutions are described here.
In R, the class
A toy example
I'll generate some simulated data for a logistic regression model (as that generalises to a lot of our other models). We have four players with different degrees of skill and a continuous covariate which does nothing.
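Something along these lines (the details below are my own sketch, not necessarily the values used in the post):

set.seed(123)
nObs <- 50                                   # attempts per player
skill <- c(-1, 0, 0.5, 1.5)                  # skill of the 4 players, on the logit scale
player <- factor(rep(c("Ann", "Bob", "Col", "Dee"), each = nObs))
x <- runif(4 * nObs)                         # a covariate which does nothing
success <- rbinom(4 * nObs, 1, plogis(skill[as.numeric(player)]))
fit <- glm(success ~ player + x, family = binomial)
summary(fit)                                 # x should come out near zero and non-significant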
In the simplest model, the number available for detection at site \( i \), \( N_i \), is modelled as being drawn from a Poisson distribution with parameter \( \lambda \). This is the biological model. The observation model assumes that detection of each individual is a Bernoulli trial, with each individual detected with probability \( r \). The data do not show which individuals were detected, only whether the species was detected: the species is recorded as detected if at least one individual is observed.
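Under this model the probability of detecting the species at site \( i \) on a single visit is \( p_i = 1 - (1 - r)^{N_i} \). A small simulation (my own illustration, with arbitrary parameter values) makes the link between the biological and observation models explicit:

set.seed(42)
nSites <- 100
lambda <- 2      # mean number of individuals per site
r <- 0.3         # per-individual detection probability
N <- rpois(nSites, lambda)       # biological model
p <- 1 - (1 - r)^N               # probability of detecting at least one individual
y <- rbinom(nSites, 1, p)        # observation model: species detected or not
mean(y)                          # proportion of sites with a detection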
In a recent post I described the limits on floating-point numbers and the effect on probabilities close to 0 or 1. This can cause errors when manipulating probabilities, in particular when calculating likelihoods, which can become very small. We get around that by working with logs of probabilities. Multiplication is then easy (just add the logs), but addition and subtraction are more difficult. I proposed functions to add probabilities and to calculate 1 - p while avoiding underflow or overflow.
In the course of applying these ideas to maximum likelihood estimation in the
If each animal produces p signs per day and signs remain visible for t days, then sign density will be:
\( S = D \times p \times t \)
where D is the density of animals. If we can estimate p and t, we can calculate D from S. In this post I want to provide code for estimating the persistence time of signs for retrospective studies of decay.
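As a taste of what that code might do, here is a minimal sketch with simulated data, assuming signs decay exponentially so that a sign of age \( a \) is still visible with probability \( \exp(-a/t) \); the post's own approach may differ:

set.seed(1)
trueT <- 20                                   # true mean persistence time (days)
age <- runif(100, 0, 60)                      # age of each sign when checked
visible <- rbinom(100, 1, exp(-age / trueT))  # still visible or not
negLogLik <- function(logT) {                 # P(visible) = exp(-age / t)
  p <- exp(-age / exp(logT))
  -sum(dbinom(visible, 1, p, log = TRUE))
}
fit <- optimise(negLogLik, interval = log(c(1, 200)))
exp(fit$minimum)                              # estimated mean persistence time, near 20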
The solution is to work with logarithms of probabilities instead of the actual values [log(p) instead of p], eg, we routinely work with log-likelihoods. Multiplying probabilities is then simply a matter of adding up the logs. But sometimes we need to add up probabilities or calculate the complement, 1 - p, and we need to do that without falling in the 0 abyss or smashing into the 1 wall.
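To give a flavour of what such functions look like (a sketch; the function names are mine, not necessarily those used in the posts):

# log(p1 + p2) given log(p1) and log(p2), staying on the log scale throughout
logAdd <- function(logp1, logp2) {
  bigger <- pmax(logp1, logp2)
  smaller <- pmin(logp1, logp2)
  bigger + log1p(exp(smaller - bigger))
}

# log(1 - p) given log(p), accurate whether p is close to 0 or close to 1
logComplement <- function(logp)
  ifelse(logp > log(0.5), log(-expm1(logp)), log1p(-exp(logp)))

logAdd(log(1e-320), log(1e-320))   # = log(2e-320), no underflow to -Inf
logComplement(log(1 - 1e-12))      # about -27.6, safely away from the 1 "wall"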
Here's a simple example: we have 3 sites, visited 4 times per year for 2 years. This is usually shoehorned into a table with 8 columns for the visits, like this:
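(The table itself is not reproduced here; the snippet below builds an illustrative stand-in in R, with arbitrary counts, 3 sites in rows and 8 columns for the 4 visits in each of 2 years.)

Y <- matrix(c(2, 0, 1, 3,  1, 0, 2, 1,
              0, 0, 0, 1,  0, 2, 0, 0,
              4, 3, 5, 2,  3, 4, 2, 3),
            nrow = 3, byrow = TRUE,
            dimnames = list(paste0("site", 1:3),
                            paste0("yr", rep(1:2, each = 4), ".visit", 1:4)))
Y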
These look like counts, but the data could be detection/nondetection (0/1) data, wind speed at each visit, or the name of the observer.
SECR. Because markings differ on each side of the animal, cameras are usually set in pairs to simultaneously record both sides. I have just been looking at ways to analyse data from unpaired cameras, and was surprised to find that unpaired cameras can be preferable to paired cameras when the number of cameras is limited.
This was based on simulations with just one set of parameter values, but it does suggest that projects with limited resources should consider using unpaired cameras, as this allows more locations to be sampled.
Update 5 Nov 2018: See this new paper by Ben Augustine et al (2018) Spatial capture–recapture with partial identity: An application to camera traps, Annals of Applied Statistics 12 (1) 67-95
People seem to run into problems with different versions of the Mac OS, R, JAGS and the rjags package. The only way to stay sane is to use recent versions of all four.
Check your Mac OS version!
From the Apple menu, choose About This Mac; the version number appears below the name. Note whether you have v. 10.11 (El Capitan) or later. If you have an earlier version, upgrade your OS before going further.
I already had R and JAGS 3 installed, together with
In the last post we looked at a way to use conjugate distributions for several parameters via a Gibbs sampler. The output from this was an MCMC sample of random draws from the posterior distribution. We can produce similar MCMC samples without using conjugate distributions, with a method often called "Metropolis-within-Gibbs".
The idea for the sampler was developed by Nicholas Metropolis and colleagues in a paper in 1953. This was before the Gibbs sampler was proposed, but it uses the same idea of updating the parameters one by one. A better name would be "componentwise random walk Metropolis sampler". The rules for the random walk ensure that a large number of samples will be a good description of the posterior distribution.
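To make the idea concrete, here is a bare-bones componentwise random walk Metropolis sampler, written from scratch for a deliberately simple problem, the mean and standard deviation of a normal sample (my own sketch, not the example used in the post):

set.seed(123)
y <- rnorm(30, mean = 10, sd = 2)          # some data

logPost <- function(mu, sigma) {
  if (sigma <= 0) return(-Inf)
  sum(dnorm(y, mu, sigma, log = TRUE)) +   # log-likelihood
    dnorm(mu, 0, 100, log = TRUE) +        # vague prior for mu
    dunif(sigma, 0, 100, log = TRUE)       # vague prior for sigma
}

niter <- 10000
chain <- matrix(NA, niter, 2, dimnames = list(NULL, c("mu", "sigma")))
mu <- mean(y) ; sigma <- sd(y)             # starting values
for (i in 1:niter) {
  # update mu, holding sigma at its current value
  muProp <- rnorm(1, mu, 0.5)
  if (log(runif(1)) < logPost(muProp, sigma) - logPost(mu, sigma))
    mu <- muProp
  # update sigma, holding mu at its current value
  sigmaProp <- rnorm(1, sigma, 0.5)
  if (log(runif(1)) < logPost(mu, sigmaProp) - logPost(mu, sigma))
    sigma <- sigmaProp
  chain[i, ] <- c(mu, sigma)
}
colMeans(chain[-(1:1000), ])               # posterior means after discarding burn-in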
As we saw in the last post, conjugate distributions provide an easy way to calculate posterior distributions for a single parameter, such as detection of a species during a single visit to a site where it is present. If we have more than one unknown parameter in our model - as with a simple occupancy model, where we have detection and occupancy - we may still be able to use conjugacy via a Gibbs sampler.
Gibbs sampling works if we can describe the posterior for each parameter when all the other parameters in the model are known.
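As a concrete sketch (my own, not the code from the post), here is a minimal Gibbs sampler for the two-parameter occupancy model: detection and occupancy each get a conjugate beta update once the latent occupancy state of each site is known, and that latent state is itself drawn from its conditional distribution.

set.seed(1)
nSites <- 100 ; nVisits <- 5
psiTrue <- 0.6 ; pTrue <- 0.3
z <- rbinom(nSites, 1, psiTrue)            # true (unobserved) occupancy
y <- rbinom(nSites, nVisits, pTrue * z)    # detections per site

niter <- 5000
out <- matrix(NA, niter, 2, dimnames = list(NULL, c("psi", "p")))
zHat <- as.numeric(y > 0)                  # starting values for latent occupancy
for (i in 1:niter) {
  # 1. occupancy | z : beta posterior with a uniform Beta(1, 1) prior
  psi <- rbeta(1, 1 + sum(zHat), 1 + nSites - sum(zHat))
  # 2. detection | z, y : beta posterior, using only the occupied sites
  p <- rbeta(1, 1 + sum(y[zHat == 1]), 1 + sum(nVisits - y[zHat == 1]))
  # 3. z | psi, p, y : sites with detections are occupied; for the rest, draw
  #    from the conditional probability of occupancy given no detections
  probZ <- psi * (1 - p)^nVisits / (psi * (1 - p)^nVisits + 1 - psi)
  zHat <- ifelse(y > 0, 1, rbinom(nSites, 1, probZ))
  out[i, ] <- c(psi, p)
}
colMeans(out[-(1:1000), ])                 # compare with psiTrue and pTrue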
As our example, we'll use estimation of detection probability from data for repeat visits to a site which is known to be occupied by our target species. First, we'll describe the beta distribution, then see how that can be combined with our data. A discussion of priors will follow, and we'll finish with brief descriptions of conjugate priors for other types of data.
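The core calculation is short enough to show here (a sketch with made-up numbers): y detections in n visits to an occupied site, combined with a Beta(a, b) prior, give a Beta(a + y, b + n - y) posterior for detection probability.

n <- 10 ; y <- 3                 # 3 detections in 10 visits to an occupied site
a <- 1 ; b <- 1                  # Beta(1, 1), ie uniform, prior for detection
curve(dbeta(x, a + y, b + n - y), from = 0, to = 1,
      xlab = "Detection probability", ylab = "Posterior density")
qbeta(c(0.025, 0.5, 0.975), a + y, b + n - y)   # median and 95% credible interval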
We'll use a simple occupancy model. It has just two parameters, and both must be between 0 and 1. That means that we can plot all possible combinations of the two parameters in a simple two-dimensional graph. As we'll see, we need to add a third dimension, but three is still manageable.
Currently it has functions for estimating occupancy, abundance from closed capture-recapture data, density from spatial capture-recapture data, and survival from mark-recapture data, plus a slew of functions for species richness and alpha and beta diversity.
It is intended to be used for (1) simulations and bootstraps, (2) teaching, and (3) introducing Bayesian methods. And it should work on all platforms: Windows, Linux, and Mac.
Faced with patches of suitable habitat surrounded by inhospitable terrain, or a large extent of habitat punctuated with patches of non-habitat, we have the choice of ML methods or one of the packages designed specifically for Bayesian SECR analysis, such as SPACECAP or SCRbayes. But then we are limited to the range of models provided by the package authors: we don't have the flexibility to specify our own models that comes with WinBUGS, OpenBUGS or JAGS.
Here I present a way to incorporate patchy habitat into a BUGS/JAGS model specification.
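One common approach (not necessarily the exact one described in the post) is to discretise the region into a habitat matrix, habmat, with 1 for habitat and 0 for non-habitat pixels, supply it to JAGS as data together with a vector of 1s, and use the Bernoulli "ones trick" so that activity centres in non-habitat pixels get zero prior probability. A sketch of the relevant part of the JAGS model, assuming coordinates scaled to pixel units and habmat with nPixY rows and nPixX columns:

for (i in 1:M) {
  sx[i] ~ dunif(1, nPixX)     # prior for the activity centre of animal i
  sy[i] ~ dunif(1, nPixY)
  hab[i] <- habmat[round(sy[i]), round(sx[i])]   # 1 = habitat, 0 = non-habitat
  ones[i] ~ dbern(hab[i])     # ones[i] = 1 supplied as data
}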
Doing Bayesian Data Analysis: we use spinners to generate random values for continuous variables and introduce the concept of probability density.
We start off with simple spinners representing a uniform distribution over a range from, say, 0 to 0.5. We discuss the problems of attaching a probability to an exact value, which leads to probability of a range of values and hence probability density.
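In R terms, for a spinner covering 0 to 0.5 (a tiny illustration of the distinction):

punif(0.3, 0, 0.5) - punif(0.2, 0, 0.5)   # P(0.2 < value < 0.3) = 0.2
dunif(0.25, 0, 0.5)                       # probability *density* at 0.25 = 2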
Before the advent of SECR methods, putting all your traps into a single cluster with minimal perimeter length made sense, as you needed to estimate the area trapped animals came from to get a density. SECR estimates density directly, without needing to estimate area, so a single, large cluster may no longer be advantageous.
I've seen this asserted a couple of times, in particular in Tobler and Powell (2013, p. 110), and I've drawn circular home ranges myself when discussing the interpretation of the capture parameters, but I don't think it is a necessary assumption.
Capture-recapture methods (also known as mark-recapture or capture-mark-recapture) have been used to estimate the size of animal populations for many years: the first software package for analysing this kind of data, CAPTURE (Otis et al 1978), is now 35 years old. Early methods did not use the spatial component of the data, the capture locations; spatially explicit capture-recapture models (SECR, or just spatial capture-recapture, SCR) first appeared in 2004 (Efford 2004).
The idea is to provide an R function which is as easy to use as t.test but gives, instead of a mere p-value, the kind of output Bayesians are used to - posterior probability distributions. John's BESTmcmc function uses JAGS, but handles all the preliminaries automatically and produces a result in a simple format.
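Typical usage looks something like this (a sketch; the data here are made up):

library(BEST)
y1 <- c(5.77, 5.33, 4.59, 4.33, 3.66, 4.48)   # made-up data for two groups
y2 <- c(3.88, 3.55, 3.29, 2.59, 2.33, 3.59)
BESTout <- BESTmcmc(y1, y2)    # runs JAGS behind the scenes
plot(BESTout)                  # posterior distribution of the difference in means
summary(BESTout)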
As soon as cameras with "data backs" came along in the early 90s, biologists realised that they could harvest data on the activity patterns of rare, secretive forest animals. Were they diurnal, nocturnal, crepuscular, or maybe cathemeral (active all around the clock)? More recently, people have tried to get clues about how species interact - competition or prey-predator interactions - from activity patterns, by examining the extent of overlap.
In our corner of the biological world, Martin Ridout and Matt Linkie published a paper (2009) on the activity patterns of tigers, clouded leopards and golden cats in Sumatra, with a lot of technical detail on how overlap could be quantified and confidence intervals estimated. They followed up (2011) with a paper on tigers and their prey, also in Sumatra. ...
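Their estimators are available in the overlap package for R. A sketch of typical usage, assuming the kerinci example data set bundled with the package (with times recorded as a fraction of the day and species codes including "tiger" and "muntjac"):

library(overlap)
data(kerinci)                            # camera-trap times from Sumatra
timeRad <- kerinci$Time * 2 * pi         # convert times to radians
tiger <- timeRad[kerinci$Sps == "tiger"]
muntjac <- timeRad[kerinci$Sps == "muntjac"]
overlapPlot(tiger, muntjac)              # fitted activity curves and their overlap
overlapEst(tiger, muntjac)               # Dhat estimators of the overlap coefficient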
This is often a silly question: the means of real populations are almost always different, even if the difference is microscopic. More useful would be to estimate the difference and the probability that it is big enough to be of practical importance. See the BEST software for a way to do this in R.
Sometimes we are presented with confidence intervals for each of the means. This happens in particular with the standard packages we use for wildlife data analysis, where the output includes confidence intervals for each coefficient or real value. Can we infer evidence of a difference from confidence intervals in the same way as for a p-value from a test of significance?
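One point worth remembering here: two 95% confidence intervals can overlap even though a t test reports a significant difference. A small constructed example in R (scaling the samples makes the means and SDs exact, so the result does not depend on the random draw):

n <- 50
a <- as.vector(scale(rnorm(n)))          # exactly mean 0, sd 1
b <- as.vector(scale(rnorm(n))) + 0.45   # exactly mean 0.45, sd 1
t.test(a)$conf.int                       # about (-0.28, 0.28)
t.test(b)$conf.int                       # about ( 0.17, 0.73): the CIs overlap
t.test(a, b)$p.value                     # about 0.03, a 'significant' difference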
Sometimes people I talk to are worried because their data aren't normally distributed, and they believe that they can't use the usual techniques such as t-tests or ANOVA without first transforming the data to be normal, or that they must resort to non-parametric methods.
There are many good reasons for transforming data or NOT using t tests or F tests, but non-normal data is not usually one of them!
A couple of people on a recent workshop had trouble with their AVG anti-virus software when installing JAGS 3.3.0.
This appears to be due to AVG's paranoia: see Martyn Plummer's comment. No malware is detected by McAfee AntiVirus Plus or Trend Micro Office Scan. See also information on false positives at the
In ecology and wildlife studies, a lot of our spatial data takes the form of rasters rather than vector files. When you first add a raster in QGIS, you usually get a plain grey rectangle, or maybe just a grey outline on a white background, as most raster file formats have no styling information. To make sense of a raster, you need to change the style.
Here I'll give some hints for "quick-and-dirty" styling to display the contents of a raster. For a more detailed tutorial, see here.
In a recent post, I showed how to deal with "distance from..." data in GIS layers using the R packages for handling spatial information. The example I used there involved
Here we will see how to do the same thing in QGIS.
At our recent workshop on Geographical Information Systems (GIS) using Quantum GIS, we had a number of people interested in working with radio telemetry or GPS data to model animal home ranges. The home range plugin for QGIS doesn't work with current versions of QGIS, at least on Windows.
It is designed to pass data to R, get the adehabitat package to do the home range estimation, and pass the result back to QGIS. QGIS uses Python code, and getting it to talk to R requires a bit of software called "RPy2". This was always difficult to set up on Windows, and since Python has been upgraded, RPy2 no longer works. In any case, the adehabitat package has been replaced by new packages with a wider range of options.
So now it's better to prepare spatial data in QGIS, read the files into R, process with adehabitatHR, write the results to new files, and load into QGIS.
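A rough sketch of that workflow (the file and column names are made up, and the details will depend on your data):

library(sf)            # to read and write the files prepared in QGIS
library(sp)
library(adehabitatHR)  # home range estimation; works with sp objects

pts <- st_read("gps_fixes.shp")                  # points exported from QGIS
ptsSp <- as(pts, "Spatial")                      # adehabitatHR wants sp classes
ptsSp$AnimalID <- factor(ptsSp$AnimalID)         # one UD per animal
kud <- kernelUD(ptsSp[, "AnimalID"], h = "href") # kernel utilisation distributions
hr95 <- getverticeshr(kud, percent = 95)         # 95% home range polygons
st_write(st_as_sf(hr95), "homerange95.shp")      # back to a file QGIS can load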
We recently ran a workshop on Geographical Information Systems (GIS) using Quantum GIS for ecologists and wildlife researchers. For many species, distance from water, a road, forest edge, or a settlement may be an important habitat variable.
For example, we may be using automatic cameras to investigate occupancy of sites by leopards. Probability of occupancy may depend on distance from the nearest road. Given vector layers with roads and camera locations, we want to do two things:
I sometimes need to put formulae into my web pages, and I've been exploring the use of MathJax.
In the past I've inserted the formula into MS Word with MS Equation 3.0, done a screen capture, cropped the image to the formula I want, saved it as a .GIF file, and then displayed it on the web page as an image. So I get something like this for the Poisson distribution:
\[ P(X = x) = \frac{\lambda^x e^{-\lambda}}{x!} \]
That's not ideal. If I want to change anything, I have to start all over again from Word. It's also messy if I want to put something small, like an inline symbol, into the text; for a start it doesn't line up properly. MathJax allows me to type the formula in LaTeX style directly into the HTML code for my web page.
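For example, the Poisson formula above is produced by typing the LaTeX directly into the page, something like this (the exact delimiters depend on the MathJax configuration):

\[ P(X = x) = \frac{\lambda^x e^{-\lambda}}{x!} \]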
I have a collection of data sets for use during workshops or just to play with when trying out new statistical techniques or computer code.
A big question is what format to use, and I've changed my mind on this several times already!
After looking at this blog post by John Mount I've decided to try using tab-separated files with a .tsv extension.
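Base R handles these without any extra packages (the file name here is just an example):

dat <- data.frame(site = c("A", "B", "C"), count = c(2, 0, 5))
write.table(dat, "example.tsv", sep = "\t", row.names = FALSE, quote = FALSE)
read.delim("example.tsv")   # read.delim expects tab-separated files by default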