CRAN Package Check Results for Maintainer ‘Kamil Wais <kamil.wais at gmail.com>’

Last updated on 2025-05-18 19:50:55 CEST.

Package                 ERROR  OK
GitAI                       2  11
R4GoodPersonalFinances         13

Package GitAI

Current CRAN status: ERROR: 2, OK: 11

Version: 0.1.0
Check: tests
Result: ERROR
    Running ‘testthat.R’ [3s/4s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > # Learn more about the roles of various files in:
    > # * https://r-pkgs.org/testing-design.html#sec-tests-files-overview
    > # * https://testthat.r-lib.org/articles/special-files.html
    > 
    > library(testthat)
    > library(rlang)
    
    Attaching package: 'rlang'
    
    The following objects are masked from 'package:testthat':
    
        is_false, is_null, is_true
    
    > library(GitAI)
    > 
    > test_check("GitAI")
    [ FAIL 4 | WARN 5 | SKIP 7 | PASS 35 ]
    
    ══ Skipped tests (7) ═══════════════════════════════════════════════════════════
    • OPENAI_API_KEY env var is not configured (5): 'test-process_content.R:3:3',
      'test-process_content.R:17:3', 'test-process_repos.R:5:3',
      'test-set_llm.R:4:3', 'test-set_llm.R:15:3'
    • On CRAN (2): 'test-add_files.R:22:3', 'test-set_repos.R:2:3'
    
    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Error ('test-set_llm.R:36:3'): setting LLM with default provider ───────────
    Error in `Provider(name = name, model = model, base_url = base_url, params = params, extra_args = extra_args)`: Required
    Backtrace:
        ▆
     1. └─GitAI::set_llm(my_project) at test-set_llm.R:36:3
     2. ├─rlang::exec(provider_method, !!!provider_args)
     3. └─GitAI (local) `<fn>`(model = "gpt-4o-mini", seed = NULL, echo = "none")
     4. └─GitAI:::mock_chat_method(...) at tests/testthat/setup.R:46:3
     5. ├─rlang::exec(provider_class, !!!provider_args) at tests/testthat/setup.R:24:3
     6. └─ellmer (local) `<S7_class>`(...)
     7. ├─S7::new_object(...)
     8. └─ellmer::Provider(...)
    ── Error ('test-set_llm.R:60:3'): setting arguments for selected provider ─────
    Error in `Provider(name = name, model = model, base_url = base_url, params = params, extra_args = extra_args)`: Required
    Backtrace:
        ▆
     1. └─GitAI::set_llm(my_project, provider = "openai", model = "model_mocked") at test-set_llm.R:60:3
     2. ├─rlang::exec(provider_method, !!!provider_args)
     3. └─GitAI (local) `<fn>`(model = "model_mocked", seed = NULL, echo = "none")
     4. └─GitAI:::mock_chat_method(...) at tests/testthat/setup.R:46:3
     5. ├─rlang::exec(provider_class, !!!provider_args) at tests/testthat/setup.R:24:3
     6. └─ellmer (local) `<S7_class>`(...)
     7. ├─S7::new_object(...)
     8. └─ellmer::Provider(...)
    ── Error ('test-set_llm.R:89:3'): setting LLM without system prompt ────────────
    Error in `Provider(name = name, model = model, base_url = base_url, params = params, extra_args = extra_args)`: Required
    Backtrace:
        ▆
     1. └─GitAI::set_llm(initialize_project("gitai_test_project")) at test-set_llm.R:89:3
     2. ├─rlang::exec(provider_method, !!!provider_args)
     3. └─GitAI (local) `<fn>`(model = "gpt-4o-mini", seed = NULL, echo = "none")
     4. └─GitAI:::mock_chat_method(...) at tests/testthat/setup.R:46:3
     5. ├─rlang::exec(provider_class, !!!provider_args) at tests/testthat/setup.R:24:3
     6. └─ellmer (local) `<S7_class>`(...)
     7. ├─S7::new_object(...)
     8. └─ellmer::Provider(...)
    ── Error ('test-set_llm.R:105:3'): setting system prompt ───────────────────────
    Error in `Provider(name = name, model = model, base_url = base_url, params = params, extra_args = extra_args)`: Required
    Backtrace:
        ▆
     1. ├─GitAI::set_prompt(set_llm(my_project), system_prompt = "You always return only 'Hi there!'") at test-set_llm.R:105:3
     2. └─GitAI::set_llm(my_project)
     3. ├─rlang::exec(provider_method, !!!provider_args)
     4. └─GitAI (local) `<fn>`(model = "gpt-4o-mini", seed = NULL, echo = "none")
     5. └─GitAI:::mock_chat_method(...) at tests/testthat/setup.R:46:3
     6. ├─rlang::exec(provider_class, !!!provider_args) at tests/testthat/setup.R:24:3
     7. └─ellmer (local) `<S7_class>`(...)
     8. ├─S7::new_object(...)
     9. └─ellmer::Provider(...)
    
    [ FAIL 4 | WARN 5 | SKIP 7 | PASS 35 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc

Version: 0.1.0
Check: tests
Result: ERROR
    Running ‘testthat.R’ [8s/17s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > # Learn more about the roles of various files in:
    > # * https://r-pkgs.org/testing-design.html#sec-tests-files-overview
    > # * https://testthat.r-lib.org/articles/special-files.html
    > 
    > library(testthat)
    > library(rlang)
    
    Attaching package: 'rlang'
    
    The following objects are masked from 'package:testthat':
    
        is_false, is_null, is_true
    
    > library(GitAI)
    > 
    > test_check("GitAI")
    [ FAIL 4 | WARN 5 | SKIP 7 | PASS 35 ]
    
    ══ Skipped tests (7) ═══════════════════════════════════════════════════════════
    • OPENAI_API_KEY env var is not configured (5): 'test-process_content.R:3:3',
      'test-process_content.R:17:3', 'test-process_repos.R:5:3',
      'test-set_llm.R:4:3', 'test-set_llm.R:15:3'
    • On CRAN (2): 'test-add_files.R:22:3', 'test-set_repos.R:2:3'
    
    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Error ('test-set_llm.R:36:3'): setting LLM with default provider ───────────
    Error in `Provider(name = name, model = model, base_url = base_url, params = params, extra_args = extra_args)`: Required
    Backtrace:
        ▆
     1. └─GitAI::set_llm(my_project) at test-set_llm.R:36:3
     2. ├─rlang::exec(provider_method, !!!provider_args)
     3. └─GitAI (local) `<fn>`(model = "gpt-4o-mini", seed = NULL, echo = "none")
     4. └─GitAI:::mock_chat_method(...) at tests/testthat/setup.R:46:3
     5. ├─rlang::exec(provider_class, !!!provider_args) at tests/testthat/setup.R:24:3
     6. └─ellmer (local) `<S7_class>`(...)
     7. ├─S7::new_object(...)
     8. └─ellmer::Provider(...)
    ── Error ('test-set_llm.R:60:3'): setting arguments for selected provider ─────
    Error in `Provider(name = name, model = model, base_url = base_url, params = params, extra_args = extra_args)`: Required
    Backtrace:
        ▆
     1. └─GitAI::set_llm(my_project, provider = "openai", model = "model_mocked") at test-set_llm.R:60:3
     2. ├─rlang::exec(provider_method, !!!provider_args)
     3. └─GitAI (local) `<fn>`(model = "model_mocked", seed = NULL, echo = "none")
     4. └─GitAI:::mock_chat_method(...) at tests/testthat/setup.R:46:3
     5. ├─rlang::exec(provider_class, !!!provider_args) at tests/testthat/setup.R:24:3
     6. └─ellmer (local) `<S7_class>`(...)
     7. ├─S7::new_object(...)
     8. └─ellmer::Provider(...)
    ── Error ('test-set_llm.R:89:3'): setting LLM without system prompt ────────────
    Error in `Provider(name = name, model = model, base_url = base_url, params = params, extra_args = extra_args)`: Required
    Backtrace:
        ▆
     1. └─GitAI::set_llm(initialize_project("gitai_test_project")) at test-set_llm.R:89:3
     2. ├─rlang::exec(provider_method, !!!provider_args)
     3. └─GitAI (local) `<fn>`(model = "gpt-4o-mini", seed = NULL, echo = "none")
     4. └─GitAI:::mock_chat_method(...) at tests/testthat/setup.R:46:3
     5. ├─rlang::exec(provider_class, !!!provider_args) at tests/testthat/setup.R:24:3
     6. └─ellmer (local) `<S7_class>`(...)
     7. ├─S7::new_object(...)
     8. └─ellmer::Provider(...)
    ── Error ('test-set_llm.R:105:3'): setting system prompt ───────────────────────
    Error in `Provider(name = name, model = model, base_url = base_url, params = params, extra_args = extra_args)`: Required
    Backtrace:
        ▆
     1. ├─GitAI::set_prompt(set_llm(my_project), system_prompt = "You always return only 'Hi there!'") at test-set_llm.R:105:3
     2. └─GitAI::set_llm(my_project)
     3. ├─rlang::exec(provider_method, !!!provider_args)
     4. └─GitAI (local) `<fn>`(model = "gpt-4o-mini", seed = NULL, echo = "none")
     5. └─GitAI:::mock_chat_method(...) at tests/testthat/setup.R:46:3
     6. ├─rlang::exec(provider_class, !!!provider_args) at tests/testthat/setup.R:24:3
     7. └─ellmer (local) `<S7_class>`(...)
     8. ├─S7::new_object(...)
     9. └─ellmer::Provider(...)
    
    [ FAIL 4 | WARN 5 | SKIP 7 | PASS 35 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang
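Note: all four failures originate in the mocked chat provider built in tests/testthat/setup.R, which ultimately calls ellmer::Provider(); the truncated error message ("Required") suggests that a property ellmer now treats as required is not being supplied by the mock. As a purely illustrative sketch (the helper name skip_if_provider_unavailable() is hypothetical, not part of GitAI), a test-side guard could skip these tests instead of erroring when the upstream constructor cannot be built:

    # Hypothetical helper for tests/testthat/setup.R: skip tests that depend on
    # the mocked provider if ellmer::Provider() can no longer be constructed
    # with the arguments the mock supplies (e.g. after an upstream API change).
    skip_if_provider_unavailable <- function(provider_args) {
      ok <- tryCatch(
        {
          # Attempt the same constructor call the mock makes.
          rlang::exec(ellmer::Provider, !!!provider_args)
          TRUE
        },
        error = function(e) FALSE
      )
      testthat::skip_if(!ok, "mocked ellmer::Provider() could not be constructed")
    }

Such a guard would only keep the check from failing on CRAN; the underlying fix is still to supply whatever property ellmer::Provider() requires in its current release.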

Package R4GoodPersonalFinances

Current CRAN status: OK: 13