CRAN Package Check Results for Package scoringutils

Last updated on 2024-11-15 19:49:36 CET.

Times (Tinstall, Tcheck, Ttotal) are in seconds; some flavors report only Ttotal.

Flavor                             Version  Tinstall   Tcheck   Ttotal  Status  Flags
r-devel-linux-x86_64-debian-clang  2.0.0        9.26   164.88   174.14  ERROR
r-devel-linux-x86_64-debian-gcc    2.0.0        7.24   144.41   151.65  OK
r-devel-linux-x86_64-fedora-clang  2.0.0                        354.22  OK
r-devel-linux-x86_64-fedora-gcc    2.0.0                        321.88  OK
r-devel-windows-x86_64             2.0.0       13.00   191.00   204.00  OK
r-patched-linux-x86_64             2.0.0        9.34   204.93   214.27  OK
r-release-linux-x86_64             2.0.0        8.14   200.83   208.97  OK
r-release-macos-arm64              2.0.0                        107.00  OK
r-release-macos-x86_64             2.0.0                        191.00  OK
r-release-windows-x86_64           2.0.0       13.00   192.00   205.00  OK
r-oldrel-macos-arm64               2.0.0                         98.00  OK
r-oldrel-macos-x86_64              2.0.0                        182.00  OK
r-oldrel-windows-x86_64            2.0.0       17.00   230.00   247.00  OK

Check Details

Version: 2.0.0
Check: tests
Result: ERROR
  Running ‘testthat.R’ [50s/58s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > library(testthat)
    > library(scoringutils)
    scoringutils 2.0.0 introduces major changes. We'd love your feedback!
    <https://github.com/epiforecasts/scoringutils/issues>.
    To use the old version, run:
    `remotes::install_github('epiforecasts/scoringutils@v1.2.2')`
    This message is displayed once per session.
    >
    > test_check("scoringutils")
    [ FAIL 8 | WARN 0 | SKIP 13 | PASS 535 ]

    ══ Skipped tests (13) ══════════════════════════════════════════════════════
    • On CRAN (13): 'test-class-forecast.R:133:5', 'test-get-correlations.R:50:3',
      'test-get-coverage.R:67:3', 'test-get-coverage.R:90:3',
      'test-get-forecast-counts.R:63:3',
      'test-helper-quantile-interval-range.R:104:3',
      'test-helper-quantile-interval-range.R:158:3',
      'test-pairwise_comparison.R:537:3', 'test-pairwise_comparison.R:545:3',
      'test-plot_heatmap.R:9:3', 'test-plot_wis.R:24:3', 'test-plot_wis.R:35:3',
      'test-plot_wis.R:47:3'

    ══ Failed tests ═════════════════════════════════════════════════════════════
    ── Failure ('test-metrics-binary.R:17:3'): correct input works ──────────────
    Expected `assert_input_binary(observed, predicted)` to run without any conditions.
    i Actually got a <simpleError> with text:
      Assertion on 'observed' failed: Must have exactly 2 levels.
    ── Failure ('test-metrics-binary.R:29:3'): correct input works ──────────────
    Expected `assert_input_binary(observed, predicted = 0.2)` to run without any conditions.
    i Actually got a <simpleError> with text:
      Assertion on 'observed' failed: Must have exactly 2 levels.
    ── Failure ('test-metrics-binary.R:33:3'): correct input works ──────────────
    Expected `assert_input_binary(observed, matrix(predicted))` to run without any conditions.
    i Actually got a <simpleError> with text:
      Assertion on 'observed' failed: Must have exactly 2 levels.
    ── Error ('test-metrics-binary.R:48:3'): function throws an error for wrong input formats ──
    Error in `assert_input_binary(observed = observed, predicted = as.list(predicted))`:
      Assertion on 'observed' failed: Must have exactly 2 levels.
    Backtrace:
        ▆
     1. ├─testthat::expect_error(...) at test-metrics-binary.R:48:3
     2. │ └─testthat:::expect_condition_matching(...)
     3. │   └─testthat:::quasi_capture(...)
     4. │     ├─testthat (local) .capture(...)
     5. │     │ └─base::withCallingHandlers(...)
     6. │     └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo))
     7. └─scoringutils:::assert_input_binary(observed = observed, predicted = as.list(predicted))
     8.   └─checkmate::assert_factor(observed, n.levels = 2, min.len = 1)
     9.     └─checkmate::makeAssertion(x, res, .var.name, add)
    10.       └─checkmate:::mstop(...)
    ── Error ('test-metrics-binary.R:120:3'): function throws an error when missing observed or predicted ──
    Error in `assert_input_binary(observed, predicted)`:
      Assertion on 'observed' failed: Must have exactly 2 levels.
    Backtrace:
        ▆
     1. ├─testthat::expect_error(brier_score(observed = observed), "argument \"predicted\" is missing, with no default") at test-metrics-binary.R:120:3
     2. │ └─testthat:::expect_condition_matching(...)
     3. │   └─testthat:::quasi_capture(...)
     4. │     ├─testthat (local) .capture(...)
     5. │     │ └─base::withCallingHandlers(...)
     6. │     └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo))
     7. └─scoringutils::brier_score(observed = observed)
     8.   └─scoringutils:::assert_input_binary(observed, predicted)
     9.     └─checkmate::assert_factor(observed, n.levels = 2, min.len = 1)
    10.       └─checkmate::makeAssertion(x, res, .var.name, add)
    11.         └─checkmate:::mstop(...)
    ── Error ('test-metrics-binary.R:134:3'): Brier score works with different inputs ──
    Error in `assert_input_binary(observed, predicted)`:
      Assertion on 'observed' failed: Must have exactly 2 levels.
    Backtrace:
        ▆
     1. ├─testthat::expect_equal(...) at test-metrics-binary.R:134:3
     2. │ └─testthat::quasi_label(enquo(object), label, arg = "object")
     3. │   └─rlang::eval_bare(expr, quo_get_env(quo))
     4. └─scoringutils::brier_score(observed, predicted = 0.2)
     5.   └─scoringutils:::assert_input_binary(observed, predicted)
     6.     └─checkmate::assert_factor(observed, n.levels = 2, min.len = 1)
     7.       └─checkmate::makeAssertion(x, res, .var.name, add)
     8.         └─checkmate:::mstop(...)
    ── Error ('test-metrics-binary.R:156:3'): Binary metrics work within and outside of `score()` ──
    Error in `assert_forecast(data)`:
    ! Checking `forecast`: Input looks like a binary forecast, but found the
      following issue: Assertion on 'observed' failed: Must have exactly 2 levels.
    Backtrace:
        ▆
     1. ├─scoringutils::score(as_forecast_binary(df)) at test-metrics-binary.R:156:3
     2. └─scoringutils::as_forecast_binary(df)
     3.   ├─scoringutils::assert_forecast(data)
     4.   └─scoringutils:::assert_forecast.forecast_binary(data)
     5.     └─cli::cli_abort(c(`!` = "Checking `forecast`: Input looks like a binary forecast, but\n found the following issue: {input_check}"))
     6.       └─rlang::abort(...)
    ── Error ('test-metrics-binary.R:171:3'): `logs_binary()` works as expected ──
    Error in `assert_input_binary(observed, predicted)`:
      Assertion on 'observed' failed: Must have exactly 2 levels.
    Backtrace:
        ▆
     1. ├─testthat::expect_equal(...) at test-metrics-binary.R:171:3
     2. │ └─testthat::quasi_label(enquo(object), label, arg = "object")
     3. │   └─rlang::eval_bare(expr, quo_get_env(quo))
     4. └─scoringutils::logs_binary(observed, predicted)
     5.   └─scoringutils:::assert_input_binary(observed, predicted)
     6.     └─checkmate::assert_factor(observed, n.levels = 2, min.len = 1)
     7.       └─checkmate::makeAssertion(x, res, .var.name, add)
     8.         └─checkmate:::mstop(...)

    [ FAIL 8 | WARN 0 | SKIP 13 | PASS 535 ]
    Deleting unused snapshots:
    • get-correlations/plot-correlation.svg
    • get-coverage/plot-interval-coverage.svg
    • get-coverage/plot-quantile-coverage.svg
    • get-forecast-counts/plot-available-forecasts.svg
    • pairwise_comparison/plot-pairwise-comparison-pval.svg
    • pairwise_comparison/plot-pairwise-comparison.svg
    • plot_heatmap/plot-heatmap.svg
    • plot_wis/plot-wis-flip.svg
    • plot_wis/plot-wis-no-relative.svg
    • plot_wis/plot-wis.svg
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-debian-clang
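All eight failures trace to the same call, `checkmate::assert_factor(observed, n.levels = 2, min.len = 1)`, inside `scoringutils:::assert_input_binary()`: the `observed` input must be a factor with exactly two levels. A minimal standalone sketch of how that checkmate assertion behaves, using made-up data rather than the package's own test fixtures:

    library(checkmate)

    # A factor with two declared levels satisfies the assertion, even if
    # only one of the levels actually occurs in the data: n.levels checks
    # nlevels(x), i.e. the declared levels.
    observed <- factor(c(1, 1, 0), levels = c(0, 1))
    assert_factor(observed, n.levels = 2, min.len = 1)  # passes invisibly

    # A factor built from a single distinct value without explicit levels
    # has only one level and triggers the error seen throughout the log.
    observed <- factor(c(1, 1, 1))
    assert_factor(observed, n.levels = 2, min.len = 1)
    # Error: Assertion on 'observed' failed: Must have exactly 2 levels.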
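For context, a sketch of the binary-forecast workflow these tests exercise. `brier_score()`, `logs_binary()`, `as_forecast_binary()`, and `score()` are the exported functions visible in the backtraces; the example data, the column names of `df`, and the convention that `predicted` is the probability of the highest factor level are assumptions based on the scoringutils 2.0.0 documentation:

    library(scoringutils)

    # Hypothetical inputs: 'observed' is a factor with exactly two levels,
    # 'predicted' the forecast probability that the outcome equals the
    # second (highest) factor level.
    observed  <- factor(c("no", "yes", "yes", "no"), levels = c("no", "yes"))
    predicted <- c(0.2, 0.9, 0.6, 0.4)

    # Per-forecast scores from the standalone metric functions.
    brier_score(observed, predicted)
    logs_binary(observed, predicted)

    # The class-based interface used in the failing test at
    # test-metrics-binary.R:156: build a forecast object, then score it.
    # Columns other than 'observed' and 'predicted' form the forecast unit.
    df <- data.frame(
      model     = "example_model",
      id        = 1:4,
      observed  = observed,
      predicted = predicted
    )
    score(as_forecast_binary(df))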