Calculate validation metrics using all held-back predictions at once

## Arguments

- `model`: an object of class `train`
## Value

Regression (`postResample`) or classification (`confusionMatrix`) statistics.

## Details

Relevant when folds are not representative of the entire area of interest. In this case, metrics such as R2 are not meaningful, because per-fold averages do not reflect the general ability of the model to explain the entire gradient of the response. Comparable to LOOCV, predictions from all held-back folds are pooled here and used together to calculate the validation statistics.
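The pooling described above can be sketched in a few lines of base R. This is a minimal illustration, not the package implementation: it assumes that, with `savePredictions = "final"`, caret stores the held-back observations and predictions in `model$pred`, and that the statistics are then computed once over the pooled vectors (as `caret::postResample` would). The simulated `obs` and `pred` vectors below stand in for those pooled columns.

```r
# Stand-ins for the pooled held-back predictions from all CV folds
# (in practice these would come from model$pred$obs and model$pred$pred).
set.seed(1)
obs  <- runif(100)
pred <- obs + rnorm(100, sd = 0.1)

# Regression statistics computed over all folds at once,
# equivalent to caret::postResample(pred, obs):
rmse <- sqrt(mean((pred - obs)^2))
r2   <- cor(pred, obs)^2
mae  <- mean(abs(pred - obs))
c(RMSE = rmse, Rsquared = r2, MAE = mae)
```

The key point is that there is a single RMSE/R2/MAE triple for the whole hold-out set, rather than an average of per-fold values.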

## Examples

```r
library(caret)
#> Loading required package: lattice
library(CAST)
data(cookfarm)
dat <- cookfarm[sample(1:nrow(cookfarm),500),]
indices <- CreateSpacetimeFolds(dat,"SOURCEID","Date")
ctrl <- caret::trainControl(method="cv",index = indices$index,savePredictions="final")
model <- caret::train(dat[,c("DEM","TWI","BLD")],dat$VW, method="rf", trControl=ctrl, ntree=10)
#> note: only 2 unique complexity parameters in default grid. Truncating the grid to 2 .
#>
global_validation(model)
#>       RMSE   Rsquared        MAE
#> 0.08848113 0.13992098 0.06953367
```