
step_nzv() creates a specification of a recipe step that will potentially remove variables that are highly sparse and unbalanced.


Usage

step_nzv(
  recipe,
  ...,
  role = NA,
  trained = FALSE,
  freq_cut = 95/5,
  unique_cut = 10,
  options = list(freq_cut = 95/5, unique_cut = 10),
  removals = NULL,
  skip = FALSE,
  id = rand_id("nzv")
)



Arguments

recipe

A recipe object. The step will be added to the sequence of operations for this recipe.


...

One or more selector functions to choose variables for this step. See selections() for more details.


role

Not used by this step since no new variables are created.


trained

A logical to indicate if the quantities for preprocessing have been estimated.

freq_cut, unique_cut

Numeric parameters for the filtering process. See the Details section below.


options

A list of options for the filter (see Details below).


removals

A character string that contains the names of columns that should be removed. These values are not determined until prep() is called.


skip

A logical. Should the step be skipped when the recipe is baked by bake()? While all operations are baked when prep() is run, some operations may not be able to be conducted on new data (e.g. processing the outcome variable(s)). Care should be taken when using skip = TRUE as it may affect the computations for subsequent operations.


id

A character string that is unique to this step to identify it.


Value

An updated version of recipe with the new step added to the sequence of any existing operations.


Details

This step can potentially remove columns from the data set. This may cause issues for subsequent steps in your recipe if the missing columns are specifically referenced by name. To avoid this, see the advice in the Tips for saving recipes and filtering columns section of selections().

This step diagnoses predictors that have one unique value (i.e. are zero variance predictors) or predictors that have both of the following characteristics:

  1. they have very few unique values relative to the number of samples and

  2. the ratio of the frequency of the most common value to the frequency of the second most common value is large.

For example, a near-zero variance predictor might be one that, for 1000 samples, has two distinct values, 999 of which are the same value.

To be flagged, first, the ratio of the frequency of the most prevalent value to the frequency of the second most prevalent value (called the "frequency ratio") must be above freq_cut. Second, the "percent of unique values," the number of unique values divided by the total number of samples (times 100), must be below unique_cut.

In the above example, the frequency ratio is 999 and the unique value percent is 0.2%.
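The two diagnostics can be computed directly in base R. The sketch below reproduces the 1000-sample example above; it is an illustration of the filtering logic, not the internal recipes implementation.

```r
# A predictor with 1000 samples: 999 copies of one value, one of another
x <- c(rep("a", 999), "b")

# Frequency ratio: most common value's count over the second most common
tab <- sort(table(x), decreasing = TRUE)
freq_ratio <- as.numeric(tab[1] / tab[2])          # 999 / 1 = 999

# Percent unique: unique values over total samples, times 100
pct_unique <- 100 * length(unique(x)) / length(x)  # 2 / 1000 * 100 = 0.2

# Flagged when both default thresholds are crossed (95/5 and 10)
freq_ratio > 95 / 5 && pct_unique < 10
#> [1] TRUE
```

With the defaults, this predictor is removed because 999 > 19 and 0.2 < 10.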


Tidying

When you tidy() this step, a tibble is returned with columns terms and id:


terms

character, the selectors or variables selected


id

character, id of this step

Tuning Parameters

This step has 2 tuning parameters:

  • freq_cut: Frequency Distribution Ratio (type: double, default: 95/5)

  • unique_cut: % Unique Values (type: double, default: 10)
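Because both parameters are tunable, the thresholds can be left as placeholders and optimized later. A minimal sketch, assuming the standard tidymodels tuning workflow (the tune package supplies tune(); the recipe variables follow the example below):

```r
library(recipes)
library(tune)

# Mark both filter thresholds for tuning instead of fixing them
rec_tune <- recipe(HHV ~ carbon + hydrogen + oxygen +
  nitrogen + sulfur + sparse,
  data = biomass_tr
) %>%
  step_nzv(all_predictors(),
    freq_cut = tune(),
    unique_cut = tune()
  )
```

The marked parameters can then be resolved with a grid search via tune_grid() as usual.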

Case weights

This step performs an unsupervised operation that can utilize case weights. As a result, case weights are only used with frequency weights. For more information, see the documentation in case_weights and the examples on tidymodels.org.

See also

Other variable filter steps: step_corr(), step_filter_missing(), step_lincomb(), step_rm(), step_select(), step_zv()


Examples

data(biomass, package = "modeldata")

biomass$sparse <- c(1, rep(0, nrow(biomass) - 1))

biomass_tr <- biomass[biomass$dataset == "Training", ]
biomass_te <- biomass[biomass$dataset == "Testing", ]

rec <- recipe(HHV ~ carbon + hydrogen + oxygen +
  nitrogen + sulfur + sparse,
  data = biomass_tr
)
nzv_filter <- rec %>%
  step_nzv(all_predictors())

filter_obj <- prep(nzv_filter, training = biomass_tr)

filtered_te <- bake(filter_obj, biomass_te)
any(names(filtered_te) == "sparse")
#> [1] FALSE

tidy(nzv_filter, number = 1)
#> # A tibble: 1 × 2
#>   terms            id       
#>   <chr>            <chr>    
#> 1 all_predictors() nzv_evI1V
tidy(filter_obj, number = 1)
#> # A tibble: 1 × 2
#>   terms  id       
#>   <chr>  <chr>    
#> 1 sparse nzv_evI1V